This project is a continuation of an earlier one on Fortran/Ada conversion. Its purpose is to enable Bath to experiment with a general-purpose transformer in combination with a Perq display to discover which program transformations are most useful to practising programmers in various contexts.
The short term objectives of the project include the design of a high-level language conversion system using internal tree representation.
Medium term objectives are to identify program transformations useful in such systems.
In the long term the group aims to identify the kinds of program transformation tool most useful to programmers working interactively via a bit-mapped display.
The earlier system as described in the published paper [1] is now available both on Multics and on Perq under POS. The Karlsruhe GAG system [2] has also been transferred to a Perq under POS for use as a tool for the generation of attributed trees.
A general-purpose transformer for IBIS using attribute grammars is currently being designed.
The work is likely to find its first applications in selecting transformations to be incorporated in high-level language conversion systems. Discussions with potential Alvey partners are in progress.
Contact - Dr P J L Wallis
This project is concerned with the construction of software tools for program verification.
The tool will be based on a verification condition generator for a Pascal-based language developed at QUB by M A Malik as part of his Ph.D. project.
The primary short-term goals of the project are to provide a usable verification tool together with a collection of case studies in its application.
The long-term goals are to investigate how appropriate the techniques developed are for the correct construction of large software products, and perhaps to mechanise the verification process completely.
Work on the project commenced on 1 July 1983. An investigation of the use of effect clauses to capture the semantics of procedure and function calls, iterative constructs and constructs with abnormal exits was undertaken. The use of these clauses as an annotational mechanism is closely related to the decision that the specification language should be functional. A paper on this aspect of the work is currently being prepared. Malik's system has been extended to accommodate this method of annotation and proof rules have been incorporated to deal with further statement and data types.
Drs R and T Nicholl resigned from their posts as research assistants on the project on 31 December 1983 to take up lectureships at the New University of Ulster. Replacements are currently being sought. However, the Nicholls are continuing to contribute to the project, and an investigation of the specification of programs in terms of effects on abstract data types is under way. Extension of the verification condition generator is continuing. Dr Gray of QUB, whose main interest is in functional programming, is giving helpful advice.
It is hoped that the software tools will prove to be valuable in educational establishments where a formal approach to the development of programs is taken and that the study of the collection of case studies will allow novice programmers to acquire and develop the skills necessary for the successful pursuit of their discipline.
For further information please contact Dr M Clint.
Micro-PSL is a software tool which is used during the requirements specification stage of the Information System Life Cycle. It provides computer assistance to check the consistency and completeness of a System Specification before embarking on the subsequent Design and Implementation phases.
Micro-PSL is a micro/mini computer version of the large mainframe PSL/PSA system developed by the ISDOS group at the University of Michigan, USA.
The primary short-term goal has been to develop a version of Micro-PSL for the PERQ running under PNX, which would include the complete PSL (Problem Statement Language) used by the Michigan PSL/PSA system. The first PERQ version has been designed to accept Information Systems Requirements specified in PSL, check them for consistency and completeness against a database of previously defined requirements and subsequently provide detailed analysis of the contents of the requirements database.
All input is keyboard oriented and all output is lineprinter oriented. The long-term goals of the project are to define and develop tools, like Micro-PSL, which are more user friendly (ie. graphical input/output, voice input), and also to develop tools which will assist in the prototyping of systems (eg. systems defined in PSL could be prototyped in Ada).
A preliminary version of Micro-PSL has been implemented on the PERQ under PNX which accepts 95% of the language PSL from ISDOS. The ISDOS group has been approached for their permission to release the software to participants in the SERC Software Technology Initiative since PSL/PSA is a proprietary product. Such permission is expected shortly, at which time Micro-PSL will be available to interested parties.
Investigation of a graphical front-end to Micro-PSL is now in progress. A detailed specification of such a front-end, to be implemented on the PERQ, should be completed by the end of April. Prototyping of systems defined in PSL by automatic translation from PSL to Ada is also under investigation.
Micro-PSL has been used to define Business Information System Requirements both within the Computer Science Department and in a limited number of external commercial systems. Its use need not be limited to commercial systems, however. (PSL/PSA is used in commercial, military and industrial environments to define requirements for a great variety of types of system).
For further information please contact the following people: Dr R J Thomas or Mr J A Kirkham.
It is essential during the development and testing of a compiler to have available a good set of test programs. The purpose of this project is to construct an automatic generator of test programs to complement existing hand-coded ones.
A short term goal is to construct a suitable specification language for expressing the input to the automatic generator. This language is based on an attribute grammar. The aim is to enable the generator to construct programs which not only compile, but also execute meaningfully.
The long term goals involve constructing a suite of software tools, written in Pascal and running under Unix, which can be used to test particular parts of a compiler.
After some early experiments using a simple BNF-style description of the grammar and a generator driven by a random number generator, a preliminary version of the specification language has been designed.
A pre-processor is currently being implemented which will convert the input, written in the specification language, to a set of tables to drive the generator. Some of the main ideas for the generator have already been tried out during the preliminary trials and it is hoped very soon to start work on the generator itself.
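The early BNF-driven experiments can be illustrated with a minimal sketch (the grammar, names and depth bound here are hypothetical; the actual specification language is attribute-based and considerably more sophisticated, so as to produce programs that execute meaningfully rather than merely compile):

```python
import random

# A toy BNF-style grammar: each nonterminal maps to a list of
# alternative productions; symbols in angle brackets are nonterminals.
GRAMMAR = {
    "<stmt>": [["<assign>"], ["<if>"]],
    "<assign>": [["x", ":=", "<expr>"]],
    "<if>": [["if", "<expr>", ">", "0", "then", "<stmt>"]],
    "<expr>": [["x"], ["1"], ["<expr>", "+", "<expr>"]],
}

def generate(symbol, depth=0):
    """Expand a symbol by choosing productions at random.

    A depth bound steers choices towards the shortest alternative,
    so that generated programs stay finite."""
    if symbol not in GRAMMAR:
        return [symbol]                     # terminal: emit as-is
    choices = GRAMMAR[symbol]
    if depth > 4:                           # force the shortest alternative
        choices = [min(choices, key=len)]
    production = random.choice(choices)
    tokens = []
    for s in production:
        tokens.extend(generate(s, depth + 1))
    return tokens

random.seed(0)
print(" ".join(generate("<stmt>")))
```

A generator of this kind produces syntactically valid but semantically arbitrary programs, which is precisely the limitation the attribute-grammar approach is intended to overcome.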
The final product may be of use to anyone interested in the production and testing of compilers. Most of the work initially will be based on testing aspects of Pascal compilers, but the suite will be designed to be applicable to a wide variety of programming languages.
For further information contact either Dr C J Burgess or Dr C L James.
This project involves the design and construction of a programming environment built around a novel programming language. This language, called Poly, allows objects such as editors and compilers to be given types, and uses of them to be type-checked. A persistent storage system is used to provide permanent storage for objects so that there is no need for converting between a disc storage form and a main memory form. The research is being carried out by D C J Matthews; M J C Gordon is the grant-holder.
The long term aim is to develop an integrated environment in which all objects have a type and all operations are type checked. The immediate goals involve completing the persistent storage system which will be used as the basis of a filing system for the environment. The next stage is to develop some of the tools, such as a structure editor, to operate on objects in the environment.
Cambridge have a working compiler written in Poly which is used to create new objects in the persistent storage system. The persistent storage system itself is nearly complete, requiring only a transaction system to ensure consistency.
The transaction system for the persistent store is being developed and it is being tuned to improve performance. A typed filing system is being constructed using a system of nested environments (set of name-object pairs).
A non-persistent version of the system is available for use. Cambridge expect to be able to release a version based on the persistent storage system soon. Although Poly is a single-language system, Cambridge expect the ideas to be applicable to integrated environments supporting multiple languages.
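The central idea of the persistent store, that objects written to it survive from one session to the next with no explicit conversion, can be sketched as follows (in Python, not Poly, using the standard shelve module; the store path is invented for the example):

```python
import os
import shelve
import tempfile

# A sketch of the persistent-store idea: an object written to the
# store in one "session" is simply there in the next, so a program
# never converts explicitly between a disc form and an in-memory form.
store_path = os.path.join(tempfile.mkdtemp(), "store")

with shelve.open(store_path) as store:       # first "session"
    store["counter"] = store.get("counter", 0) + 1

with shelve.open(store_path) as store:       # later "session"
    counter = store["counter"]               # the update has persisted
```

Poly goes further than this sketch in that persistence is transparent and uniform: any typed object, including compilers and editors, can live in the store, rather than only explicitly pickled values.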
For further information contact D C J Matthews (dcjm@camjenny), Cambridge University Computer Laboratory, Corn Exchange Street, Cambridge, CB2 3QG.
To provide practical support (on the Acorn 68000 system) for the high performance functional programming language Ponder [1] (polymorphic data types, normal order evaluation). The research is being carried out by J Fairbairn and S Wray; M J C Gordon is the grant holder.
To show that, for a wide variety of tasks, industrial scale functional programs are not only possible, but clearer and more reliable than imperative ones.
An implementation of Ponder has been produced which runs at about 7000 function calls a second on the 68000 [2]. This is around the same speed as compiled Lisp.
An interactive colour graphics interface (with a mouse) is under development. Further improvements to the implementation are in progress.
The Ponder system is a research tool, not a product. Nevertheless its current performance is comparable to commercially available imperative languages.
Contact Jon Fairbairn or Stuart Wray at the Computer Laboratory, Cambridge.
Both available from Jon Fairbairn and Stuart Wray above.
One of the major problems in the design of a programming environment is the representation of structural links between the several representations of a program module - its source code, binary, specification, etc. - and the maintenance of consistency between these representations as the various modules in a program are written and edited. To this end Cambridge have developed the entity system as a kernel to sustain the development of a variety of programming environments.
Entities are data types for representing information about program modules. An entity has a base type which includes its storage class. The latter describes how the entity is to be stored on disc. One entity may keep links to other entities in its stored representation, typically to describe the dependency relationships between modules. In addition to the base class, an entity may define a series of projection classes. These serve to define externally accessible forms of an entity, derived from the data stored in the base class. Thus a projection class is a list of procedures that can be invoked on the entity. When a software tool wishes to access some part of an entity it must approach the kernel giving the name of the entity and a perspective class. The perspective class will be a list of procedures the tool wishes to invoke, and the data types of the arguments and results of those procedures. The kernel will attempt to find a path through the projection procedures of the entity to construct a set of procedures that can satisfy the perspective. Typically, projections correspond to functions such as compiling and linking that convert between the various representations of a program module.
Projections are a very powerful and general mechanism. With them the entity system can describe the rules governing bindings between versions of program modules, specifications, edit histories and documentation. Procedures can be defined that implement entity naming schemes to provide directories or catalogues to help the user move around the pieces of a large program. The checking implied by the use of data types gives a high degree of integrity by ensuring that tools and data are compatible. Data types include time-stamps so that multiple or revised implementations of tools can be distinguished and automatically correlated with the stored form of entities on which they operate. The results of projection can be cached to improve the speed of access to frequently accessed components. The cached results can be held in volatile storage, since they can always be recomputed by projection from the permanently stored base classes.
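The projection mechanism can be sketched in miniature (in Python, with invented names; the real kernel is written in Modula 2 and type-checks perspectives rather than looking them up by string):

```python
# Sketch: an entity holds a stored base representation, a table of
# projection procedures, and a cache of derived (recomputable) forms.
class Entity:
    def __init__(self, name, base):
        self.name = name
        self.base = base            # stored (base-class) representation
        self.projections = {}       # target form -> (source form, procedure)
        self.cache = {}             # volatile: always recomputable

    def define(self, source, target, proc):
        self.projections[target] = (source, proc)

    def project(self, form):
        """Find a path of projection procedures from the base
        representation to the requested form, caching results."""
        if form == "base":
            return self.base
        if form in self.cache:
            return self.cache[form]
        source, proc = self.projections[form]   # KeyError if unsatisfiable
        result = proc(self.project(source))
        self.cache[form] = result
        return result

# A module whose "binary" is derived from its "source" by compiling.
m = Entity("hello", base="print('hi')")
m.define("base", "source", lambda b: b)
m.define("source", "binary", lambda s: compile(s, "hello", "exec"))
code = m.project("binary")   # kernel composes base -> source -> binary
```

In the real system the perspective a tool requests is a typed list of procedures, and the kernel searches the projection graph for a composition satisfying that perspective, rather than a single named form as here.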
A prototype entity system has been written in Modula 2 for a VAX-11/750 running Berkeley 4.1 Unix. A first cut at a programming environment implementation supporting the Unix Modula 2 toolset is in its early stages, using the prototype entity system. Subsequent plans include the reimplementation of a revised kernel within the Unix operating system for greater efficiency and the writing of programming environments with tools that exploit the facilities of the entity system to its full power. In the first instance this will be done for Modula 2 and then for CLU in order to demonstrate that the kernel can support multiple languages.
A large bibliography of papers and articles on programming environments and related issues has been collected online for access via the Unix refer-package.
The project is presently staffed by the grant holder, one R.A. and a SERC-supported research student.
For further information on either project, please contact Dr A J Herbert at the address above or by SERC mail to HERBERT@CAGA.
This project seeks to investigate the use of personal computers (namely PERQs) as programmer workstations linked by local area network to a central programming environment service. The main thrust of the research is to split the implementation of the programming environment and its software toolset so that highly interactive tasks such as editing and debugging are carried out in the local terminal, while data storage is carried out at a central point. In this respect, the programming environment server is reminiscent of the file servers found in other distributed systems.
This approach presents simpler problems of data management and interlocking than would be the case if programmers had replicated copies of the full programming environment stored at their workstations. On the other hand, a central service is not well placed to meet the needs of highly interactive programs such as editors and debuggers, so Cambridge choose to run these locally at the workstation. Cambridge treat the local disc (if any) at a workstation as a volatile cache and rely on the server for permanent storage. Some functions, such as compilation, could be carried out locally in the terminal, at the server, or even on some other machine allocated dynamically for the purpose.
In addition to the distributed aspects of this research Cambridge also intend to devote some time to exploiting the capabilities of bit-mapped displays for presenting information to the programmer and providing control through pointing and selection actions using a mouse.
Cambridge propose to use the entity system described above as the basis for the central server and to treat projections as the units of distribution. In the early stages of the project Cambridge intend to focus on distributing individual tools, before going on to a more general mechanism.
The project is currently staffed by the grantholder and one RA.
For further information on either project, please contact Dr A J Herbert at the address above or by SERC mail to HERBERT@CAGA.
The research deals with methods of producing programs which are guaranteed to meet their specifications, aiming to avoid the high cost of bugs in software produced by traditional methods. This has been a major concern of mathematical work on the theory of programming in the last 15 years and is now recognised as a commercial reality for the late 1980s.
Work has been done on modularity and behavioural equivalence (Sannella and Schoett), on institutions and liberality (Tarlecki), and on a method of proving that a logic is an institution using free algebra semantics (Burstall and Goguen, SRI). Edinburgh are thinking about mechanical verification systems, especially LCF and Cornell's PRL, which they have just got running on a VAX 750, and have a continuing interest in developing ML, especially for modularity.
There has been considerable interest in Edinburgh's work on specification and in ML (support from ICL). BP have supported Burstall with a one-year Research Fellowship. Consultancy to ICL, IBM, Xerox PARC, SRI.
Contact: Mrs. D. McKie for Deptl. Research Reports, Miss E.Kerse for other publications, Professor R. Burstall for technical matters, Dr. J. Scott about ML system.
SERC net address: R.M.Burstall@EDXA (soon to move to VAX).
The grant is to support a computer, and the salary of a computer manager, to support software engineering research in the grant-holders' projects. The official start date was April 1, 1984.
With the help of George Cleland, CS Department Manager, a computer configuration was soon decided, and permission to purchase was granted by SERC on November 7. This machine was installed in March this year. Its current configuration is:
It runs Berkeley Unix BSD 4.2. Both the hardware and software have proved reliable, and initial problems experienced with the York X25 networking software have now been remedied. A user population of about 30 is supported, with typically 5 to 15 users on the machine concurrently.
Dr G.D.M. Ross was appointed as Manager/Systems Programmer (full-time) from June 1, 1984.
The Data Curator group, under the direction of Malcolm Atkinson, are working, in collaboration with the University of St. Andrews, on a portable Unix implementation of PS-algol.
A number of systems, notably LCF, PRL and Unix ML, have been imported on to the machine and are being actively used by the Theory of Computation group at Edinburgh. A port of Edinburgh ML from VAX/VMS is also being carried out.
The general aim of this project is to develop and investigate various proof methodologies for concurrent systems.
A proof methodology must be based on a sound semantic theory of processes. One goal of the project is to formulate such theories and investigate their significance. Edinburgh have two candidates, called bisimulation equivalence and testing equivalence. Both theories lead to different notions of the behaviour of systems and consequently lead to different proof techniques. Edinburgh plan to apply these to medium scale examples taken from the fields of protocol specification and VLSI systems. To help in this endeavour Edinburgh plan to implement a workbench to help people use these proof techniques and more generally help them to evaluate their ideas of what proof techniques are more suited to specific problem domains.
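As an illustration of the first of these theories, bisimulation equivalence of finite-state processes can be decided by partition refinement; the sketch below (in Python, with a hypothetical transition-triple encoding) checks the classic pair of processes a.b.0 + a.c.0 and a.(b.0 + c.0), which are distinguished by bisimulation:

```python
# Minimal bisimulation check by partition refinement (illustrative
# only; the planned workbench is far more general).
def bisimulation_classes(states, trans):
    """trans: set of (state, action, state) triples.
    Returns the coarsest partition of states such that related
    states have matching transitions into related classes."""
    blocks = [set(states)]
    while True:
        def signature(s):
            # which blocks s can reach, and by which action
            return frozenset(
                (a, i)
                for (p, a, q) in trans if p == s
                for i, b in enumerate(blocks) if q in b
            )
        new_blocks = []
        for b in blocks:
            groups = {}
            for s in b:
                groups.setdefault(signature(s), set()).add(s)
            new_blocks.extend(groups.values())
        if len(new_blocks) == len(blocks):   # no block was split
            return new_blocks
        blocks = new_blocks

# p = a.b.0 + a.c.0   versus   q = a.(b.0 + c.0)
trans = {("p", "a", "p1"), ("p", "a", "p2"),
         ("p1", "b", "0"), ("p2", "c", "0"),
         ("q", "a", "q1"), ("q1", "b", "0"), ("q1", "c", "0")}
classes = bisimulation_classes({"p", "p1", "p2", "q", "q1", "0"}, trans)
same = any({"p", "q"} <= b for b in classes)
print("p ~ q:", same)    # not bisimilar: q's choice is made later
```

Under testing equivalence, by contrast, these two processes are related differently, which is exactly why the two theories lead to different proof techniques.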
Implementing ML on the Perq has turned out to be very difficult due to limitations in the operating system which were not initially apparent, and for the moment the attempt has been abandoned. On the other hand, it has led to a version of ML in ML, and Edinburgh have started to provide a more sophisticated user environment for the language. Edinburgh have recently written a CCS simulator in ML and plan to augment it, giving the user more control over the execution of specifications. Some progress has been made at the theoretical level; the results are reported in the references below.
Please contact either M. Hennessy or K. Mitchell, Dept of Computer Science, University of Edinburgh.
The project aims to extend the study of LCF, a system for conducting formal proofs with computer assistance, both by application studies and by extending its methodology (based upon the use of proof tactics). More generally, it aims to test the hypothesis that a functional programming language forms a good medium for machine-assisted formal studies in computing.
David Schmidt left the project on December 1, 1983, to resume his teaching post at Kansas State University.
Larry Paulson took up a teaching post at Cambridge University on October 1, 1983, but remains actively involved with the project.
Lincoln Wallen, from the Artificial Intelligence Department at Edinburgh joined the project for one year starting April 1, 1984.
Michael Gordon has been verifying hardware. His Logic for Sequential Machines (LSM) [1] extends the logic of LCF to handle bit-strings and communication lines; a logical term can then stand for a device with many inputs and outputs, and a binding mechanism specifies the connections among devices. In particular Gordon has shown [2] that a simple 16-bit computer, with ALU, memory, registers and microcode controller, meets its specification.
David Schmidt devoted most of his efforts to understanding different styles of tactical (= goal-directed) proof. In particular [3] he showed how, in a particular problem domain, interactive proofs are more naturally constructed if the axioms of that domain are presented not as axioms, but as inference rules; this especially eases the tactical proof style. He also generalised the existing tactic-machinery in the LCF systems, into a notation for expressing control algorithms for tactical theorem provers [4]. This work aims to clarify the nature of the basic elements from which a goal-directed proof assistant is built.
Lincoln Wallen is pursuing two separate aims. The first concerns proofs in resolution; he has found an efficient general strategy utilising unification and matings. Secondly, he is investigating the use of tactics in the PRL system of Constable at Cornell and taking it a stage further; this leads to a tactical calculus in which tactics are viewed not as subgoaling functions, as is normally done in LCF, but (more generally) as functions which manipulate proofs. This work, which also extends Schmidt's, will be documented soon.
Larry Paulson completed a large case study: the machine-assisted verification of the Unification algorithm [5]. This study, among others, has led him to make several improvements to the LCF implementation; the resulting Cambridge LCF running on VAX under Unix is now the best version. Several of his papers [6-8] are largely concerned with improvements. They involve changes to the underlying logic, which make proofs - in particular the derivation of a wide class of induction rules within the logic - significantly easier; they also involve enhancements of the implementation, for example a more flexible tool for applying simplification (= rewriting) rules.
For a few years declarative (i.e. functional and logic) languages have been proposed as ways of solving the software crisis by concentrating on correctness rather than efficiency. Unfortunately such languages are far from current machine architecture and hence quite inefficient. This work addresses theoretical and practical issues in improving this.
The project set out to extend and enlarge upon the investigator's PhD thesis [3] on efficient applicative language implementation by developing
The programme of work was extended to include logic programming languages (e.g. Prolog).
The investigator is now working with the programming methodology group at the address below, who provide a very stimulating environment. Current work involves exploration of one aspect of Milner's polymorphic type system for HOPE-like languages [5] and the relationship of different models of concurrency (in particular CCS and Petri nets [1]).
The work on type checking Prolog is immediately applicable; contact the authors [7] for source code. Given a research assistant, much more of the work could be mechanised.
Current address/contact A. Mycroft, Inst. for Informationsbehandling, Chalmers TH, S-412 96 Gothenburg, Sweden.
SERCnet Address mycroft%chalmers@YKXA
uucp Address... !mcvax!enea!chalmers!mycroft
This project is for the development of a run-time diagnostics system for the Pascal language on the Perq. The implementation is derived from a similar system written at Keele for the GEC 4000 series machines.
The first goal is to implement the basic diagnostics system PRTDS (Pascal Run-Time Diagnostics System) on the Perq in the form it currently exists on GEC machines.
The second goal is to enhance PRTDS considerably to make full use of the Perq graphics capabilities and add sophisticated run-time program monitor/control features.
The equipment required for this project did not arrive at Keele until July 1983. The software required for the project consists of PNX 2.00 and several source programs concerned with the Pascal system on the Perq; these are still awaited. Such work as is possible without these tools has been carried out, in order to establish a first-order representation of programs on the Perq.
Work is currently held up pending arrival of the required source programs. These are, however, expected in the very near future.
The system will provide an extremely powerful tool for program development and run-time control. The data-structure display features are not restricted to such a diagnostics system and should prove to be of more general applicability. While the system is to be written for Perq Pascal, the general principles and the majority of the product are specific neither to the Perq nor to Pascal.
For further information please contact the following people: Dr N H White, Dr K H Bennett or Mr G Haylett.
The Software Tools Centre provides tools for the Common Base, particularly tools that exploit the graphic facilities of a raster display with mouse.
The original philosophy of Unix was geared to glass teletypes. Currently the main way PNX (the Common Base Unix) exploits the PERQ screen is to allow several glass teletypes, each in a separate window. Only a few programs (eg. spy) actually use the mouse or any of the graphical facilities. The goal of the research is to fill this gap. This can only be done by a fundamental redesign of the way a program interacts with its user (eg. spy is not an adapted version of ed; instead it represents, as it must do, an entirely different conceptual model).
The Tools Centre will not only develop its own tools, but will also validate tools from elsewhere.
The first two projects were concerned with infrastructure: support of communications and an archiving mechanism to aid recovery from disk corruption. In addition the Centre has produced a menu generator, which greatly facilitates the construction and layout of structured menus: a designer of a program simply draws the menu he wants, and the menu generator produces a file that can be incorporated in the program.
Current projects are
The whole purpose of the Centre is to produce tools that are applicable to the Common Base Community. Tools will be released by arrangement with the Rutherford Laboratory.
Information is available from: Software Tools Centre, The University, Canterbury, Kent, CT2 7NF. Tel: 0227-66822. Contacts are John Bovey (ext 7690), Peter Simon (ext 7690) or Peter Brown (ext 636).
For the first part of his Special Replacement Fellowship, Professor Brown has chosen to work on the use of computers to display documentation.
Current documentation stored in the computer is usually a direct reproduction of manuals that were previously printed on paper. The goal of the research is to examine ways to exploit the medium of computer-stored documentation, so that documentation can adapt dynamically to the needs of the reader.
A prototype system, called doc, has been implemented. The user can invoke this to display the documentation that is of interest to him - this could range from a description of a text editor to information about repairing a bicycle. Doc first displays a brief summary of its material, together with some icons. By selecting appropriate icons the user can adapt the documentation as he wishes. Examples are:
All these, except the last, are implemented by simple mechanisms for expanding pre-written material - doc has no pretensions in AI.
In the current stage of the project, the emphasis is on writing real documentation, investigating how this can exploit the new dynamic mode of operation, and obtaining feed-back on how the prototype doc system can be developed.
In the longer term there is interest in the incorporation of graphics in the text, and in a database of documentation, with an intelligent front-end to aid traversal of the database.
The ideas are immediately applicable to any form of textual information stored on the computer.
The aim of this project is to demonstrate an approach that has been developed for enabling one to specify the behavioural properties a digital system is required to exhibit and to verify that a proposed implementation would in fact possess these properties.
The behavioural properties of a digital system are described in terms of the relationship which obtains between all possible sets of waveforms that could be present at the ports of the system. An important feature of this approach is that the specifications are partial; that is to say, they can allow for the imprecision and indeterminacy inherently associated with physical systems.
A proposed implementation of a behavioural specification is described in terms of the behavioural specifications of each of its component parts, together with a structural specification (eg. a circuit diagram, or mask layout) defining their structural interrelationship.
Verifying that a proposed implementation (eg. a ripple adder) is correct simply means checking that the desired behavioural properties (eg. that it adds binary numerals) are logically implied by the behavioural properties of its parts, and the interrelationships between these behaviours implied by the structural specification.
Waveforms in digital systems are represented by partial functions. A behavioural specification of a device is a relation on a tuple of waveforms. Such specifications form a lattice, and the usual lattice operators have useful properties.
Specifications (both structural and behavioural) are described in a structured way by using combinators to express composition of various kinds. The underlying logic used is a higher-order, structured, sorted predicate logic. A sharp distinction is made between the formal logic and its possible interpretations for different digital technologies. Proofs are computationally checked for soundness using interactive proof checking techniques.
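The verification step described above can be illustrated with a deliberately tiny propositional analogue (a sketch in Python with invented names; the real system works with relations on waveforms in a higher-order sorted logic, not single boolean samples):

```python
from itertools import product

# Behavioural specifications as relations on port values: here a
# spec is a predicate over a tuple of single boolean samples.
def nand_spec(a, b, out):
    return out == (not (a and b))

def not_spec(a, out):          # desired behaviour of the composite
    return out == (not a)

# Proposed implementation: an inverter built from one NAND gate.
# The structural specification ties both NAND inputs to the wire a.
def implementation(a, out):
    return nand_spec(a, a, out)

# Verification: the conjunction of the parts' behaviours (under the
# structural interconnection) must logically imply the desired spec,
# i.e. the implication must hold for every assignment to the ports.
verified = all(
    not implementation(a, out) or not_spec(a, out)
    for a, out in product([False, True], repeat=2)
)
print("inverter verified:", verified)
```

Note that the implication runs one way only: the implementation may be more determinate than the partial specification requires, which is precisely what partial specifications are designed to permit.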
A set of specifications (in the form of axiomatic theories) for describing the properties of digital waveforms and simple digital devices was produced in 1981. A formal proof of the properties of simple digital circuits (eg. an S-R flipflop) was computationally verified using a prototype proof checker in 1983. Work is presently underway on software tools to assist with proof creation.
In digital systems where human life would be hazarded by incorrect designs, formal verification techniques can be usefully employed to enhance the certainty of a design being correct. In the longer term, the use of formal verification techniques is likely to provide a more economical approach to the design of digital systems, especially those realised using VLSI.
Please contact N Daeche or F K Hanna.
The following internal reports are available:
To give the Fortran users within the Common Base access to the wide variety of currently available software covering both numerical software and software engineering tools. Where possible this software will be tuned to take advantage of the advanced graphical facilities of the PERQ.
The short term goal is to provide a solid foundation of useful Fortran software. Much of this software is written in Fortran 66 and can be improved by partially updating it to Fortran 77. In the longer term the aim is to build up this software base and, where possible, to enhance the software to use the facilities offered by the PERQ whilst minimising the effect this may have on the portability of the code.
A number of packages have already been implemented. These include LINPACK (for solving linear problems), AUGMENT (a tool to allow user defined extensions to Fortran), MP (a multiple precision arithmetic package) and the NBS Fortran 77 Analyser which allows the static and dynamic analysis of Fortran 77 programs. Several more are in the pipeline or awaiting licensing agreements to be finalised.
Work is in hand to enhance the NBS Fortran 77 Analyser to produce graphical rather than numerical results, to implement some of the large number of routines available from CACM and to investigate the use of a graphical version of NITPACK to produce a numerical help system. It is hoped that work will commence shortly, in association with NAG, on the PERQ implementation of TOOLPACK.
Upgrading software to use the new facilities in Fortran 77 can increase the reliability and portability of the code. It would be expected that much of the basic software would be valuable to industry and could now be more easily ported to other machines.
For further information please contact the following people: Dr T R Hopkins or Mr M A Broom.
In conventional programming languages, great emphasis is placed on instructing the machines how the required effect is to be computed. As a result, what the required effect is easily becomes obscured. Declarative programming stresses the what rather than the how. Its aim is to stay as close to the initial problem specification as possible, while still effectively describing a computational solution. The main formalisms currently available for declarative programming are the functional and logic programming language families, eg HOPE, KRC, LISPKIT, PROLOG, OBJ1. Over the past few years, it has become increasingly accepted that declarative programming will play a key role in addressing the software crisis, in allowing the effective exploitation of parallel architectures, and as an enabling technology for Intelligent Knowledge-Based Systems.
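The contrast between the how and the what can be made concrete with a small example (in Python, chosen here only as a vehicle; the languages named above make the declarative style pervasive rather than optional):

```python
# Imperative: says *how* -- explicit state and control flow.
def sum_odd_squares_imperative(ns):
    total = 0
    for n in ns:
        if n % 2 == 1:
            total = total + n * n
    return total

# Declarative: says *what* -- close to the specification
#     sum { n*n | n in ns, n odd }
def sum_odd_squares_declarative(ns):
    return sum(n * n for n in ns if n % 2 == 1)

assert sum_odd_squares_imperative(range(6)) == 35
assert sum_odd_squares_declarative(range(6)) == 35
```

The declarative version stays close enough to the problem statement that its correctness can be read off, which is precisely the property the functional and logic language families aim to scale up.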
The aim of the project is the development of a theory of specification and implementation which is motivated by the requirements and practices of programming. The conventional approaches based on equational logic and the use of initiality (ie. uniqueness concepts) have been abandoned, as the former constrains one's ability to state properties concisely and the latter imposes proof obligations which are much more stringent than would seem to be required by the practice of programming. The language of first order logic was chosen both because of its greater flexibility and because it is already used in programming practice (eg. in verification systems, transformation systems). The project is also meant to address the problems of database specification and modelling. Databases are seen as prototypical examples of so-called object oriented views.
The short term aims of the project are to develop the foundations for a theory of data types, first put forward in [1], and to extend or modify the ideas therein to deal with database specification. Longer term goals include, on the one hand, the laying of foundations for theories of specification (what are the basic requirements for any theory of specification?), construction of support environments for the process of specification and implementation, and interfacing such a system with particular programming languages. On the other hand, theories of database specification will be augmented with the development of general purpose query and update languages and understanding of such concepts as transactions, views, incomplete information, deductive databases, etc.
The theory put forward in [1] was based on using first order logic with non-Tarskian semantics: only namable (finitely generated, reachable) values are allowed. An object in a structure must be the value of a variable-free term in the language of the structure. Thus, for example, non-standard models of the natural numbers are disallowed. Furthermore, specifications are structured, and this structure constrains the way in which models are built. Thus, if X[Y] is a specification, then models of X[Y] are those which can be obtained by extending the models of Y, thus protecting components (Y) from disturbance by the addition of superstructure (X).
The first major achievement of this project was to give a simple proof theoretic account of these semantic constraints. This was done by extending first order logic to a fragment of Lω1ω, a logic which allows infinitary disjunctions and conjunctions and rules of inference with infinitary premises.
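The extra power of the infinitary fragment can be illustrated by the ω-rule for the natural numbers, which (unlike any finitary first order rule) admits an infinite set of premises, one per namable value:

```latex
\frac{\varphi(0),\quad \varphi(s(0)),\quad \varphi(s(s(0))),\quad \ldots}
     {\forall x\, \varphi(x)}
```

Correspondingly, reachability itself becomes expressible as the infinitary axiom $\forall x\, \bigvee_{n<\omega} x = s^{n}(0)$, so the restriction to namable values can be stated inside the logic rather than imposed on the semantics from outside.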
Based on this idea, straightforward accounts have been given for the standard notions of implementation, enrichment, parameterisation, and error handling. As to work on databases, an approach based on modal logics has been adopted [2]. A database state is seen as a first order theory (with the functions and predicates of the language as the application dependent, primitive query operations). Updates are seen as functions which map states to states (theories to theories) and are characterised in terms of modal operators in logics similar to programming logics (eg. Hoare logics, predicate transformer logics). A general purpose query language has been developed and its implementation in terms of dataflow computation is being explored [8].
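The database view just described can be sketched concretely (a toy in Python; the names and the representation of a theory as a set of ground facts are invented purely for illustration — the project works with full first order theories):

```python
# A state is a theory: here crudely modelled as a frozenset of ground facts.
State = frozenset

def hire(name, dept):
    """An update: a function mapping states to states (theories to theories)."""
    def update(state):
        return State(state | {("employee", name), ("works_in", name, dept)})
    return update

def holds(fact, state):
    """The primitive query: does the theory entail this ground fact?"""
    return fact in state

s0 = State()
s1 = hire("tsm", "computing")(s0)
assert not holds(("employee", "tsm"), s0)
assert holds(("employee", "tsm"), s1)   # in modal terms: [hire]employee(tsm)
```

The modal characterisation then speaks about what holds after an update, in the same way that a Hoare logic speaks about what holds after a statement.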
Further refinement of the logic theory of specification is being undertaken in a number of areas. Of primary importance is the characterisation of the fragment of Lω1ω which Imperial are using. Also of importance are questions concerning the machine support required for using such theory in practice. A number of properties of institutions suitable for specification have also been observed, and these ideas need to be characterised. On the database side, the work on the general purpose query language should be carried forward to provide a toy implementation. The nature of the general purpose update language also has to be explored. The important question here is: what combinators are useful in constructing updates from the primitive, application dependent ones? The problems of associating views with databases (or even constructing the database to support a pre-defined set of views), the appearance of partial or inconsistent information, and the nature of deductive information retrieval are also being explored by a research student associated with the project.
Program development is an ill understood process. Since the activities of specification and implementation (stepwise refinement) are universally acknowledged as being part of this process, it is hoped that the work outlined above will contribute to the construction of a well-founded program support environment based on powerful theorem proving tools used to support these activities. The claim that the particular theory used is better suited to software engineering practice should be tested by trials of these ideas in industrial scale projects. As to the database work, the use of abstraction implicit in the ideas being put forward can be seen as a development analogous to the introduction of abstract data types into programming. This freed software engineers from thinking about data purely in terms of their representations in the programming language in use. Today's database designer has not yet been liberated and is still tied to thinking in terms of the conventional data model (relational, hierarchical, etc.) he has chosen to use.
For further information, please contact the following: T.S.E. Maibaum, F. Golshani, M.R. Sadler, or S. Khosla.
The project aims to develop an understanding of the process of functional testing, which is widely advocated as beneficial. Experiments have shown that software developers are extremely bad at identifying software functions.
The first step is to identify where the problem lies. For this purpose, a number of software systems have been analysed in order to attempt identification and characterisation of those components which were missed by functional testing strategies. Currently, knowledge concerning the untested code has been acquired and preliminary analyses have been made. It has been demonstrated that most of the unexecuted code concerns housekeeping tasks introduced during the design stage. In the samples selected, such tasks are implemented by approximately half the total code.
At present detailed analyses of the major design methods are in hand with a view to developing an understanding of how testing strategy can be developed to ensure that all the components are thoroughly tested. The object is to increase the quality of the testing without unnecessarily constraining the designers. Currently, this work is closely co-ordinated with a related project with the European Research Office of the U.S. Army.
The achievements to date have been the development of the concept of seeded validatory functions and the organisation of confirmatory functions. The exploitation of these techniques with some specific design methods has also been completed.
The techniques developed will, in principle, be applicable over a wide range of software projects. The long term costs, however, remain to be evaluated and are a potential limiting factor.
Contact Dr M A Hennell.
This project concerns the design of formal specification and development methods for sequential, parallel and distributed systems. Formal methods for sequential systems are now quite well understood; however, such techniques do not readily transfer to parallel systems. Successful techniques for parallel systems must be capable of specifying and documenting the interference between components.
Overall, this project aims to design, investigate and exercise a particular style of formal method suitable for parallel system development, eg. a style which follows ideas presented in [3]. One of the essential properties of the desired approach is that it supports truly hierarchical development; that is, specifications can be developed independently of each other. Short term goals were to investigate the use of temporal logic in specification and to obtain methods based on such logics. Longer term goals are to combine relational and sequence logic approaches, aiming to produce a method which is usable and understandable by the non-mathematical systems designer.
In the last year, major advances have been made in the application of temporal logic in specification. By making simple changes to the underlying computational model for concurrent program execution, a compositional temporal logic specification method has been developed. Compositional proof systems for both shared variable and message based parallel languages are given in the paper [2]. The temporal logic specification technique has been successfully applied in a partial development of a packet switching network [1].
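The flavour of a temporal logic specification can be sketched in miniature (a toy in Python over finite traces; the names and the packet-network property are illustrative only — the project's own semantics and proof systems are far richer):

```python
# A trace is a finite sequence of states; a state is a set of atomic
# propositions. A formula denotes a predicate on (suffixes of) traces.

def atom(p):
    return lambda tr: p in tr[0]

def implies(f, g):
    return lambda tr: (not f(tr)) or g(tr)

def always(f):          # "box": f holds of every suffix
    return lambda tr: all(f(tr[i:]) for i in range(len(tr)))

def eventually(f):      # "diamond": f holds of some suffix
    return lambda tr: any(f(tr[i:]) for i in range(len(tr)))

# A liveness property in the packet-switching spirit:
# every send is eventually acknowledged.
spec = always(implies(atom("send"), eventually(atom("ack"))))

good = [{"send"}, set(), {"ack"}, set()]
bad  = [{"send"}, set()]
assert spec(good)
assert not spec(bad)
```

A compositional method lets such a property of the whole network be deduced from temporal specifications of its components, without re-examining their code.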
Currently the following areas are being tackled.
As distributed processing systems are becoming more important, it is necessary to possess formal development methods to avoid software crises. The results of this project are thus intended to be applicable in most aspects of parallel and distributed system design and construction. For example, some aspects of the work should be of particular interest to the telecommunications industry.
Contact either Howard Barringer, Cliff Jones or Ruurd Kuiper.
A large VAX 11/750 computer running Berkeley UNIX 4.1 has been made available to the Manchester group in order that they may successfully pursue software technology research outlined on the Manchester pages. Because of the software which they intend to import, it is essential that the machine is VAX compatible. Additional hardware in the form of local area network (LAN) and wide area network (WAN) connections has been made available.
To provide computer-based support for software engineers developing systems according to some rigorous method (eg. VDM [1]). Methods such as VDM involve the notions of formal specification, systematic program refinement, and correctness arguments. The support system will be mounted on powerful, single-user workstations.
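The style of development being supported can be indicated by a small VDM-like operation specification (illustrative only; the ASCII syntax here only approximates the notation of [1], with s~ standing for the value of s before the operation):

```
PUSH (e: Elem)
ext wr s : seq of Elem
post s = [e] ^ s~
```

The operation is specified purely by its externals (which state it may read or write) and a post-condition relating old and new states; refinement then consists of justified steps from such a specification towards executable code.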
The short term goals are to complete the specification of the VDM level; complete the initial version of our user-system interface (based upon the structure-editor paradigm) and generate a first-cut VDM support environment. The long term goals are to provide semi-automatic verification aids for both specification consistency and design correctness. Support for other developments will also be added.
By the end of 1983, the underlying database (MDB) had been specified, and implemented on the PERQ. MDB also runs on the other machines available to the project: Apollo Domain and VAX 11/750, both under Unix. Structuring methods had been formulated and included in the VDM state specification. Numerous structure-editor based systems had been studied and evaluated in depth. Following this evaluation and experimentation period the group's own user-system interface design began. Related work includes the development of the specification logic (3-valued), and the use of logic programming for the rapid prototyping of formal specifications. A paper was presented at the York workshop on Formal Specification and Verification, Easter 1983. A later version, contained in [2], is available from the authors.
Performance tests of MDB are underway. Virtual memory performance on the PERQ is of particular concern. The VDM state has been completely respecified to reflect the group's current thinking regarding the structuring of specification. The user-system interface generator is at the design/coding stage. The project is also supported in various ways by industrial contacts at ICL, STL and STC IDEC.
Currently, other research projects at Manchester are using MDB. By mid-1984, it is expected to have a support system that may be of experimental interest to other research groups and, possibly, the aforementioned commercial companies (for internal R&D only).
For further information please contact Mr I Cottam, Prof C Jones, Mr A Willis or Mr T Nipkow.
The aim of this project is to investigate the ways in which the Binary Relationship Model (BRM) can provide a suitable basis for representing visual and textual material in an integrated information system. The model will be evaluated both in its application to more complex intelligent knowledge based systems, and in its requirement for special purpose hardware.
The first stage of the project will be to design and build a simple integrated information system based on the binary relationship model. This will contain free text and structured information retrieval, document preparation aids and simple graphical processing. Further work will centre around incorporating user views and deductive capabilities into the system, as well as monitoring the systems processing requirements.
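The essence of the binary relationship model is that every fact is an instance of a binary relation between two entities, so documents, keywords, authors and graphics can all share one store. A minimal sketch (in Python; the entity and relation names are invented for illustration):

```python
# Every fact is a (subject, relation, object) instance of a binary relation.
facts = {
    ("report42", "has-author",  "teskey"),
    ("report42", "has-keyword", "retrieval"),
    ("report42", "has-keyword", "graphics"),
    ("fig1",     "appears-in",  "report42"),
}

def related(subject, relation, facts):
    """All objects standing in `relation` to `subject` -- the basic query."""
    return {o for (s, r, o) in facts if s == subject and r == relation}

assert related("report42", "has-keyword", facts) == {"retrieval", "graphics"}
assert related("fig1", "appears-in", facts) == {"report42"}
```

Free text retrieval, structured queries and graphical cross-references then all reduce to traversals of the same relation store, which is what makes the model attractive as an integrating basis.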
In the six months since the project started, Manchester have outlined the overall design of the system and implemented a preliminary version of the BRM. A number of research students have developed additional software to provide graphical interfaces, and bulk data input.
The software described above has been developed on a number of different machines; it is now being transferred to the PERQ that Manchester have just received. A preliminary implementation of user views is underway. Manchester are looking for suitable user applications on which to test the preliminary system. Manchester are cooperating closely with related groups in this department and at the University of Essex.
The results of this project would have obvious application to office automation, library and information services. It is too early to say how they could be best exploited, but Manchester have discussed the project with the British Library and the UK Atomic Energy Authority.
For further information contact Dr F N Teskey, Dept of Computer Science, University of Manchester, (SERCnet address - EFMA04@UMPA).
This project addresses the problem of supporting the task of program design for data processing applications. The aim is to specify a workbench which can be used to design, code, test, document and maintain data processing programs.
The main research goals are to formalise commercial program design sufficiently to define an automation process. These goals include:
By January 1984 much of the formalisation of JSP was complete. Many of these ideas have been communicated and confirmed through the vehicle of the Automated Aids Working Party which is a subgroup of the Jackson User Group. The automation of this formalisation is now underway. The definition of the user interface for diagram drawing, editing and program structure generation is almost complete and the interface for the subsequent stages of program design (eg. backtracking, elementary operation definition, condition definition and structure clashes) is underway. To date a full diagram editor, supported by a data dictionary representing the stored structures, is implemented on the PERQ under POS. The automation of correspondences is the next stage for implementation. More of the system would have been implemented but for the problems encountered trying to use Pascal under PNX. The group has now abandoned any thought of using PNX in the foreseeable future!
Further implementation is the short term aim together with the research required to formalise the many other aspects described above. A possible contact with industry is also currently being considered for the development of some ideas. Contact is retained with the Automated Aids Group to act as a sounding board for ideas.
The prototype is now available to create and edit structure diagrams on the PERQ. In the near future hardcopy output will be available. Diagram drawing, which is merely the start of the research, is valuable to a large number of potential users of computers, whether they be commercial users or not. The longer term applicability is aimed directly at commerce. One software house is already expressing great interest in the future product based upon the group's ideas.
The purpose of the projects is to develop computer tools to support the rigorous design of reliable software. This relies on the exploitation of mathematical formalism. The use of a formal specification language makes it possible to record the design stages of a program's development and to prove its correctness. The use of an executable specification language enables software to be prototyped and large specifications to be debugged.
The long term goals of the projects are to implement the executable specification language OBJ; then to design and implement an associated proof assistant for verifying the correctness of specifications and programs; and to investigate the role of formal specification in rigorous design methods. The immediate goals are to complete the implementation of OBJ, and to experiment with its use in the construction of software.
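The idea behind an executable specification can be conveyed by a small sketch (in Python, not OBJ; OBJ would state the same laws as equations over an abstract sort, with no concrete representation in sight):

```python
# The classical stack laws, written as executable definitions.
# The tuple representation is incidental -- only the equations matter.
def empty():        return ()
def push(e, s):     return (e,) + s
def top(s):         return s[0]
def pop(s):         return s[1:]

# The specification's equations, checked by running them on sample data:
s = push(2, push(1, empty()))
assert top(push(3, s)) == 3        # top(push(e, s)) = e
assert pop(push(3, s)) == s        # pop(push(e, s)) = s
```

Because the equations execute, a large specification can be debugged by testing long before any implementation is written; the proof assistant then turns the same equations into material for correctness proofs.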
The UMIST Proof Checker, which is a prototype for the proof assistant, has been considerably enhanced on the ICL Perq. The current version allows the semi-automatic production of proofs for theorems about OBJ-like abstract data type specifications. A subset of OBJ has been implemented in Pascal under Perq/PNX and VAX/VMS. The subset is sufficiently powerful to be used in the development of practical and executable specifications. The UMIST Proof Checker and the initial OBJ implementation were demonstrated at the Royal Society Discussion Meeting on Mathematical Logic and Programming Languages held in February 1984.
The immediate task is to develop a Pascal implementation of full OBJ using the first generation of tools. The initial subset of OBJ has been found to be very powerful for developing formal and executable specifications. The use of one generation of tools in the construction of the next provides very valuable feedback about the functionality and performance of tools as well as refining ideas on design methods. Simultaneously with the implementation work, a preparatory study of proof techniques applicable to OBJ is being carried out.
OBJ is well suited to the task of specifying Ada programs since both languages are based on data abstraction. It is expected that the tools developed under the projects will form suitable foundations for a second generation APSE in which the power of OBJ could be exploited to prototype Ada programs.
Please contact Robin Gallimore or Derek Coleman.
ARPAnet - GALL@UCL-CS (Gallimore and Coleman)
The COBOL Language Enhancement Facility (CLEF) is a compiler-integrated syntax macro facility intended to provide users with the opportunity to specify the precise dialect(s) of COBOL which they require their compiler to support. It has been designed by a BCS working party, and a pilot implementation of CLEF has been the subject of a previous SERC-funded project at UMIST.
The goal of the current project is the incorporation of the CLEF facility into an existing ICL COBOL compiler. This involves:
On the compiler side, achievements to date have centred on the solution of design problems (eg. control of macro matching; generated-text scheduling and input file switching; and accommodation of out-of-order entries in the symbol table and intermediate text) associated with the incorporation of a powerful macro facility under severe space constraints. In addition, considerable progress has been made with the supporting software, and facilities now exist for the generation of the syntax table needed to drive the compiler.
The table-driven parser is being designed, together with the action processor and the scheduler which is responsible for handling generated text. Interfaces are being specified in close collaboration with ICL.
The attraction of CLEF lies in its potential capacity to upgrade existing compilers to support features such as structured programming, database access, etc.
For further information please contact the following people: Dr P J Layzell or Dr K M Wynne, Computation Department, UMIST, P.O. Box 88, Manchester, M60 1QD.
To investigate the extent to which the techniques used for debugging hardware might be applied to the debugging of software and vice versa, and to develop debugging tools which can be applied equally well to hardware and software. Following this, it is hoped to construct a system which will be able to monitor the progress of a computation by appropriate hardware and software instrumentation.
The PERQ is to be used to support a host system providing debugging facilities for a program running in a remote microprocessor system, at present a Motorola M6809 system. The debugger is display-oriented, and will provide for cross-assembly (or compilation) and linking of programs, and down-line loading to the target system. As the program runs, the system will allow the usual debugging activities such as breakpoint setting and clearing, starting and stopping the processor, single stepping, etc. The host system will maintain all the necessary data structures for the user to communicate with the target system in source language terms.
The PERQ is also to be used to provide a friendly interface to traditional hardware instrumentation such as a logic analyser. The host system will provide the necessary communication via the GPIB to control the actions of the instrument and to receive the results of the monitoring. This data will again be presented in a display-oriented fashion, making use of user-level representations of the target hardware held within the host.
The main aim of the project is to develop an environment in which microprocessor systems may be debugged where both the hardware and the software are being designed and constructed in parallel. New techniques in debugging will be developed out of existing hardware and software techniques, and a unified environment will be developed and built.
The software debugger is at an advanced stage of development, at least as far as the communication with the target is concerned. Cross-assembly is being provided at present, together with source level specification of debugging information, and consideration is also being given to the use of high-level languages. On the hardware side, investigations are in progress to decide upon and to purchase the most suitable instrument available for the needs of the project, and at the same time various hardware description languages are being assessed as candidates for the purpose of providing a representation of the target hardware within the host.
The project is aimed at the development and promulgation of an engineering method applicable to the construction of computer based systems. Much of the complexity, unreliability, and the cost of these systems can be accounted for by the late discovery (during programming) of errors introduced at the design stage. The imprecision inherent in informally-expressed system specifications allows major errors to go undetected at the stage in the system construction process where they can be most economically corrected. The investigators believe that the application of formal (mathematically-based) methods in the design and development process will play an important part in reducing the cost and increasing the quality of systems.
Important barriers to widespread acceptance of development methods based on formal principles still remain. The long term goal is to overcome some of the fear of applying formal methods still expressed by programmers and designers working in industry.
Medium term goals are threefold:
By late 1983 collaborative projects had been established, with industrial partners (ICL, IBM) and with the Distributed Computing project which is underway in the same laboratory. Both the ICL and IBM projects involve applying Z to the description of software products which have grown so large as to be difficult to comprehend by their development and maintenance teams. In each case the purpose of the collaboration has been to develop descriptions of the products which are sufficiently accurate as to capture the essence of the product's intended behaviour, but sufficiently abstract that a newcomer to the development team can be introduced to the product without being overwhelmed by the implementation details.
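The kind of description involved can be illustrated by a schema in the style of Z, using the well-known birthday-book example from the Z literature rather than any detail of the ICL or IBM products: an operation schema names the state it changes (ΔBirthdayBook), declares its inputs, and constrains the before and after states, all without committing to any representation.

```
┌─ AddBirthday ──────────────────────────
│ ΔBirthdayBook
│ name? : NAME
│ date? : DATE
├────────────────────────────────────────
│ name? ∉ known
│ birthday′ = birthday ∪ {name? ↦ date?}
└────────────────────────────────────────
```

It is exactly this level of abstraction that allows a newcomer to grasp intended behaviour without wading through implementation detail.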
Staff on the Distributed Computing project have applied the proposed method from the very beginning of their work. Each of the services provided by their experimental local area network has been designed and documented using Z. For further details see [l].
An editing system to support the production of high quality multi-font documents containing formal material and diagrams has been constructed on the PERQ. It has been made available to a few academic institutions and is also being exploited commercially.
Work on the proof checker for the Z logic was hampered by the delayed production of the Common Base Software. A type-checker and prototype proof checker for a large subset of the logic have been completed. These will be moved to the PERQ as soon as an ML implementation becomes available. Design of the specification-support database continues: experience gained in the construction of the above mentioned document editor has been an important source of inspiration in this area.
Improved Software Engineering methods are likely to be used in all branches of programming. If the goals of the project are met then an effective and well documented method of attacking one of the fundamental problems which beset the computer industry will become freely available in the public domain.
For further information please contact B Sufrin or C A R Hoare.
This project is for the development of an incremental compiler for Ada [1] with integrated software quality assurance functions. The starting points for the work are the York Ada compiler [2], the EMACS text editor [3], and Unix [4]. Its purpose is to develop tools which assist development of Ada projects from a base of software components.
The provision of suitable programming tools is essential for the proper exploitation of Ada. While incremental compilation for Ada is the long-term goal of the project, subsidiary goals can be listed as follows:
With incorporation of parts of the York Ada compiler beginning at an early stage in this list, it is hoped that this programme will provide a staged mechanism for achieving the main goal of the project, while developing a number of other tools of wider applicability.
By February 1984, the version control system and its associated tools were complete. It is called the Project Development Environment (PDE) [5]. A radical approach was adopted, of providing enhancements to the Unix file system at the system-interface level (not altering the Unix kernel). With these enhancements arbitrary file attributes and transparent version selection are supported. In the Ada context the file attribute enhancement is needed to implement the Common APSE Interface Set (CAIS) [6] under Unix. This approach allows the new file system to exist alongside the usual Unix system, and provides for interoperability: moreover, all the usual Unix tools can be transported (usually without change) onto the new system.
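Transparent version selection can be sketched in miniature (a toy in Python; the store layout and names are invented for illustration and bear no relation to the PDE's actual implementation):

```python
# A toy model of transparent version selection: the "file system" keeps every
# version of a file, and a plain name silently denotes the newest one.
store = {
    ("main.ada", 1): "procedure Main is ... -- first draft",
    ("main.ada", 2): "procedure Main is ... -- bug fixed",
}

def open_file(name, version=None):
    if version is None:
        # Transparent selection: an unqualified name means the latest version.
        version = max(v for (n, v) in store if n == name)
    return store[(name, version)]

assert open_file("main.ada") == store[("main.ada", 2)]    # tools see the latest
assert open_file("main.ada", 1).endswith("first draft")   # history is retained
```

Because selection happens below the tool interface, existing tools that simply open files by name acquire version control without modification, which is the property that lets ordinary Unix tools move onto the new system unchanged.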
The new file system is now in use to support the project development itself. With receipt of the first stage of the York Ada compiler, work has begun on the next two parts of the project. Preliminary packages exist for tags and syntax-assistance, and the syntax tree of the York compiler will now be incorporated into the editor. This will be done through development of tools to perform transformations on the York source, inserting the information required for the editor into the attributes of the syntax tree.
The enhanced file system and version control tools are of general applicability and will soon be ready for separate release. The relative ease with which CAIS could be supported from this work has caused some interest in the Ada community [7], and some reaction from Unix sites to the version control system is also expected.
For further information please contact the following people: Dr M K Crowe, D Mackay, G Ball, M Hughes or D G Jenkins.
The research associated with this grant started with a thorough survey of existing software engineering environments. The findings of the survey have been summarised in a 288 page report [1] which also outlines our proposed research leading to an effective workstation environment to support the development of flight simulators.
A six month slip was introduced into the work because of a move from one academic institution to another. Two Research Assistants are now in post: the first started in February 1984 and the second in August 1984. The work is now well underway and the frequency of meetings with the collaborative industrial partner has increased to a regular pattern.
There are two major components of the research: firstly, the development of an integrated set of tools in a workstation environment; secondly, the investigation of the introduction of higher level abstractions in the transformation of a conceptual design to an implemented system, including both hardware and software. The work is being carried out in conjunction with an industrial partner who has substantial experience in the development of large scale systems. The investigation must therefore take account of the existing environment when considering the framework of new tool developments. Accordingly the work to date has concentrated on gaining an understanding of the processes by which the flight simulator design engineer sets about the design of a simulator sub-system. Initially Plymouth have chosen to focus attention on the Flight/Aero subsystem which uses data supplied by the aircraft manufacturers to provide various inputs to the mathematical model executed by the simulator.
Plymouth have identified the need to provide a satisfactory interactive interface between the manufacturers' data and the mathematical model used in the simulation. The form of this interface will allow the designer to manipulate the manufacturers' data at the same time as monitoring the output from the execution of the model. A requirements specification of the toolset for these operations is currently being written.
Associated with this activity is the specification of the form of the workstation to support the interactive tools. Furthermore, the study of one particular subsystem of the existing simulator has been necessary to decide on ways to monitor the output relevant to the design activity. Part of this work is concerned with the identification of the real time information flows between the various components of the subsystem. This will form the basis of the work forming the second major focus of the project, namely the development of tools to model systems using abstractions above the level of the code and hardware.
One of the most important techniques which Stirling use to overcome the complexity inherent in software design is to describe the software at an appropriate level of abstraction. If possible, the abstractions are captured in the software itself by the use of procedures and data types. Functional languages are particularly suited to this software design technique. Quite often, however, Stirling make use of abstractions in their designs which are not straightforward to represent in the programming language as procedures or data types; for example, a particular set of concepts for describing parallel activity in an otherwise sequential language. At this point, the software designer would be inclined to develop a new programming language. The development of new programming languages is a more powerful tool than the mere development of libraries of procedures and data types. This project is to investigate the problem of making special-purpose programming language design a tool in the software designer's toolkit.
The initial goal is to define a number of applications-oriented programming languages in order to validate the claim that such languages make the problem of software design simpler. It is hoped that these investigations will lead to an understanding of how to describe new programming languages economically, both for the potential user and for rapid implementation. A longer term goal will be the development of a system for software development, a principal tool in which will be the design of new programming language components particularly suited to the application in hand. The new components would be embedded in a suitable host language using techniques to be developed by the project.
To date Stirling have not made an appointment to the RA post on this project. Their investigations have concentrated on the embedding of set notation and logic in a functional language. The technique used, that of giving a denotational description of the concepts to be embedded, is an obvious one for functional languages. These embedded languages have been used to develop prototypes of a number of simple database programs, in particular a semantic network database, where the host language has been Lispkit [1]. More recently, a simple graphics application has been developed using the same technique (based on earlier work [2]).
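The denotational embedding technique mentioned above can be illustrated with a minimal sketch. This is an assumed reconstruction, not taken from the Stirling work or from Lispkit: each construct of the embedded set notation is given a denotation directly as a host-language value, here taking the denotation of a set expression to be its characteristic function.

```python
# A hypothetical sketch of embedding set notation denotationally:
# every set expression denotes a characteristic function (a predicate),
# so the embedded operators are ordinary host-language functions.

def empty():
    """The empty set: nothing is a member."""
    return lambda x: False

def singleton(a):
    """{a}: membership means equality with a."""
    return lambda x: x == a

def union(s, t):
    """s U t: a member of either denotation."""
    return lambda x: s(x) or t(x)

def intersect(s, t):
    """s n t: a member of both denotations."""
    return lambda x: s(x) and t(x)

def comprehension(pred):
    """{ x | pred(x) }: denoted directly by the predicate itself."""
    return pred

def member(x, s):
    """Membership test: apply the characteristic function."""
    return s(x)

# Build {1, 2} n { x | x is even } and test membership.
s = intersect(union(singleton(1), singleton(2)),
              comprehension(lambda x: x % 2 == 0))
```

The point of the sketch is that no interpreter or parser is needed: the embedded notation is just a library whose meanings are host-language values, which is why the technique is described as natural for functional languages.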
Stirling are investigating further application areas for which special-purpose languages, suitable for embedding in Lispkit, might be developed. An area of recent attention is a language for writing expert systems.
Another concern has been the use of such techniques with conventional languages such as Pascal. Stirling believe they now know how to embed their set notation in Pascal but need to carry out experiments to validate this belief.
Stirling hope to exploit these ideas in a related project, jointly engaged in by Stirling and ICL. That project's goals are to investigate the applicability of functional and logic programming to the rapid development of business applications. It builds upon earlier work funded by both SERC and ICL.
For further information please contact Peter Henderson, Computing Science Department, University of Stirling, Stirling FK9 4LA. Tel: 0786 3171 ext 2759. Mail address: PHSCS@SGVA or DTSG01@RLGM
Anna is an annotational language which can be regarded as an extension of Ada. It has been designed almost as a wide spectrum language, to support all phases of the software development process. It includes facilities for formal specification as well as facilities for incorporating the assertions needed to carry out formal verification. A preliminary version of Anna has been superseded by the most recent March 1984 version.
The project is proceeding on three fronts. First, Anna is being exposed to a variety of applications. Second, from the experience gained in these exercises an assessment is being made of the suitability of the language and of its likely acceptability. Finally, issues associated with the implementation of Anna are being considered.
The objective of this project is to develop software to link computers running the Unix operating system to the X25-based networks SERCnet (JANET) and the British Telecom PSS service.
The original goal was to link PDP-11 computers running 7th Edition Unix to the X25 networks. That goal was realised in 1982. The present aims are to offer support for sites which are running the York software and to improve further the reliability of the system. Longer term goals include support of more versions of Unix, integration of the Bristol JTMP software, implementation of ISO and other international standards as they become available, a high performance front end (for use with VAXes) and recoding of the front end software to make it compatible with non-DEC hardware.
The York Unix - X25 communications package has been implemented using either a FALCON SBC-11/21 or LSI-11/02 front end processor, providing X25 levels 1, 2 (LAPB) and 3, a Transport Service and Host X29/TS29, and a suite of user programs running on the host under Unix. The system can support up to 16 X25 logical channels which are designated (by the user) for use as incoming or outgoing X29/TS29 calls or FTP/MAIL streams.
The COMSYS X25 package, originally from UCL, has been used to provide levels 1, 2 and 3 X25 handling; the Yellow Book Transport Service, X29/TS29 handlers and a protocol to handle the data flow between the front end and Unix, have been developed at York. An additional device driver has been provided for the Unix kernel.
The user programs, all developed at York, include
There is full on-line documentation for these programs.
Release 1 was distributed late in 1982. Release 2.0, offering much enhanced user programs and increased reliability, was issued in September 1983.
Release 2.1, essentially a maintenance release but including support for BSD 4.1 for the first time and system documentation, is scheduled for distribution in mid-April 1984. A release to support PNX 2.0 will be forthcoming this summer, as will support for BSD 4.2. The University of York will be involved with ESPRIT EIES and it is expected that manpower will be provided for the project from this source by mid 1984.
About fifty computers have been licensed to run the York software so far. Many of these rely on it to provide a network service for their users. Several commercial sites have taken out licences and some are negotiating sublicence rights for the software.
For further information, please contact S J Smith.
This project involves the development of an Ada compiler, to assess the Ada language and provide a compiler to further progress in Software Technology.
The project set out to evaluate both the Ada language, and the level of technology required to build its compiler. This has remained the long term goal; in the short term, the compiler is being updated to accept the full Ada language, prior to its validation.
Two versions of the compiler were made available to British academic institutions, in October 1982 and December 1983; both run under Berkeley Unix version 4.1. The performance of these versions is encouraging; both run at about 600 lines a minute and cope well with programs written in the appropriate subset of Ada. The second release of the compiler is being exploited commercially under agreement with the British Technology Group.
The compiler is being redesigned to incorporate new language features; in particular, generics and derived types. A major change to the middle of the compiler should result in appreciable performance improvements, in memory requirements and (it is hoped) in speed. The industrial collaborators SSL and SPL are making suitable changes in the compiler to host it on (and target it for) the VAX VMS system.
Both releases of the compiler so far, whilst only implementing subsets of the Ada language, have been usable for the construction of software products in Ada. As such, their scope for use and exploitation is very wide, and will broaden as the compiler becomes available on the commercial market.
For further information please contact either Prof I C Wand or Dr M D Harrison.