DCS Projects: University of East Anglia
DR M R SLEEP
INSTRUCTION SETS FOR DATAFLOW ARCHITECTURES: A COMPARATIVE STUDY
Oct 78 - Sept 79
The main aim of this one-year project was the investigation of dataflow architecture at the instruction level. Funds were provided to facilitate communication with other workers in the field, and a terminal and modem were loaned to the proposer to enable the use of software at remote sites. No staff support was requested for this familiarisation project.
The travel budget allowed the proposer to visit a significant sample of dataflow and related workers, including Arvind at MIT, Gurd et al at Manchester, the Toulouse workshop, Osmon et al at Westfield, Treleaven at Newcastle and Turner at Kent.
These visits gave the proposer a much clearer picture of the state of dataflow research in late '79. Just two interesting machines were actually working (the Texas Instruments testbed and the LAU machine at Toulouse); neither supported recursion. The MIT group, who under Dennis had pioneered the dataflow approach, had no hardware plans and were still discussing ways of implementing recursion. The most convincing plans for a hardware prototype supporting this feature emerged from Gurd et al at Manchester, who gave the proposer free access to the soft version (compiler and simulator) of the (now hard) Manchester Dataflow machine.
Ronan Sleep speaking at DCS Conference, University of Sussex, 1984
Earlier in '79, Turner had published a paper showing how applicative languages could be compiled into a variable-free code using functional constants called combinators. This paper introduced the proposer to the power available in a true functional language, and he began to consider a possible connection between dataflow and combinatory code. After visiting Turner at Kent, he moved a version of Turner's language SASL to the Newcastle PDP-11, and - accessing it via the loaned terminal and modem - quickly recognised the advantages of a true functional approach, which supports infinitary objects via demand-driven evaluation.
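Turner's technique can be illustrated with the classical S/K/I bracket-abstraction rules. The sketch below is a modern reconstruction in Python, not Turner's SASL implementation; the term representation and function names are ours.

```python
# Toy bracket abstraction in the style of Turner (1979): a lambda
# abstraction is compiled into variable-free S/K/I code, which a
# reduction machine can then execute without any environment lookup.
# Applications are represented as 2-tuples (function, argument).

def abstract(var, term):
    """Compile away `var` using the classical rules:
         [x] x       = I
         [x] c       = K c            (x does not occur in c)
         [x] (e1 e2) = S ([x]e1) ([x]e2)
    """
    if term == var:
        return 'I'
    if isinstance(term, tuple):
        e1, e2 = term
        return (('S', abstract(var, e1)), abstract(var, e2))
    return ('K', term)

def reduce_(term):
    """Normal-order reduction of S/K/I combinator applications."""
    while isinstance(term, tuple):
        f, arg = reduce_(term[0]), term[1]
        if f == 'I':                                  # I x -> x
            term = arg
        elif isinstance(f, tuple) and f[0] == 'K':    # K c x -> c
            term = f[1]
        elif isinstance(f, tuple) and isinstance(f[0], tuple) \
                and f[0][0] == 'S':                   # S g h x -> (g x)(h x)
            term = ((f[0][1], arg), (f[1], arg))
        else:
            return (f, arg)   # head is not a redex
    return term
```

For example, compiling away x and then y from the term x yields S (K K) I, which correctly selects the first of its two arguments; repeated application of the S rule is also what makes naive translations grow quickly with program size.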
Although demand-drive is the antithesis of the data-drive approach, it turned out that real dataflow programs need to simulate demand-drive (via trigger tokens), most obviously for conditionals, in which one arm must surely be discarded. This point - obvious to those well versed in the lambda calculus - means that an otherwise data-driven regime must, if termination is to be guaranteed whenever possible, operate against a demand-driven backcloth. Wadge of Warwick presented a particularly nice example of how the single assignment rule of dataflow can give headaches to a compiler which wishes to avoid the excesses inherent in a purely data-driven evaluator.
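The conditional problem can be made concrete with a small sketch (ours, not from the project): hiding each arm behind a thunk, a zero-argument function demanded only when selected, is a minimal form of the demand-driven backcloth.

```python
# A sketch of the trigger problem: if a conditional is data-driven,
# both arms fire as soon as their inputs arrive, so a recursive
# definition feeds itself forever. Delaying each arm behind a thunk
# restores termination by evaluating only the arm that is demanded.

def demand_if(cond, then_thunk, else_thunk):
    # Only the chosen arm is ever evaluated.
    return then_thunk() if cond else else_thunk()

def fact(n):
    # With an eager (data-driven) conditional, fact(n - 1) would be
    # computed even when n == 0, and the recursion would never stop.
    return demand_if(n == 0, lambda: 1, lambda: n * fact(n - 1))
```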
The main achievement of this project was a recognition that a much sounder theoretical background than the single assignment principle would be needed if dataflow architectures were to fully support applicative (zero assignment!) languages. An important side effect of the proposer's work has been to contribute to the growing awareness of the relation between dataflow and the lambda calculus model of computation. Summaries of the proposer's conclusions are available in refs  and .
The helpful co-operation of Treleaven, Gurd, Arvind and the DCS co-ordinator at the time are fully acknowledged.
1. D. A. Turner, A New Implementation Technique for Applicative Languages, Software Practice and Experience, V9, pp 31-49, 1979.
2. M. R. Sleep, Applicative Languages, Dataflow, and Pure Combinatory Code, in VLSI: New Architectural Horizons (Spring COMPCON 80), IEEE 20th Spring Computer Conference Digest, 1980.
3. M. R. Sleep, Instruction Sets for Dataflow Architectures, University of East Anglia report CS/79/017/E, 1979.
DR F W BURTON and DR M R SLEEP
DISTRIBUTED EVALUATION OF APPLICATIVE PROGRAMS ON A HIGHLY INTERCONNECTED NETWORK
Aug 81 - July 83
Background and Nature of Research
Conventional programming languages are machine-oriented, requiring the programmer to tell the computer in considerable detail exactly what it must do to compute the desired result. Declarative languages such as SASL and PROLOG in principle allow the programmer to specify what is to be computed without going into any great detail about how the machine should do this.
This makes life easier for the programmer, but much harder for the implementer, who now has to work out the how from the what. The present situation appears to be that whilst declarative languages can lead to dramatic improvements in programmer productivity, current implementations are not well suited to high-performance applications.
Because evaluating expressions in a SASL-like language does not give rise to side-effects, sub-expressions may be evaluated in parallel. This suggests buying speed for such languages by employing large-scale hookups of single chip computers to perform parallel evaluation of sub-expressions. Programs written using the divide and conquer heuristic are particularly susceptible to this approach, giving potentially enormous parallelism. The goal of this project is to demonstrate, using simulation techniques, that a near linear speedup for divide and conquer algorithms is possible.
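A minimal modern sketch (ours, not the project's simulator) shows why the absence of side-effects permits this: the two recursive calls share no state, so one may be shipped to another processor. A thread pool stands in for the network of single chip computers, and a depth cutoff bounds the number of spawned tasks.

```python
# Parallel divide and conquer over a side-effect-free computation.
# The worker pool plays the role of remote processors; all names
# and parameters here are illustrative, not from the project.
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=4)

def par_sum(xs, depth=2):
    # Below the cutoff (or on trivial input) evaluate sequentially,
    # so workers are never exhausted by parents awaiting children.
    if depth == 0 or len(xs) <= 1:
        return sum(xs)
    mid = len(xs) // 2
    left = pool.submit(par_sum, xs[:mid], depth - 1)  # "remote" half
    right = par_sum(xs[mid:], depth - 1)              # local half
    return left.result() + right
```

With a balanced split and enough processors, the recursion tree yields the near linear speedup the project set out to demonstrate, provided leaf problems stay large enough to amortise communication.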
Early ideas for achieving this goal have been published at various international conferences  and widely presented in seminar form. A sophisticated simulation vehicle with colour graphic instrumentation has been developed, and early (encouraging) simulation results were presented at the MIT/ACM conference . The simulator has been extended to simulate a distributed combinator reduction machine. A method for avoiding the quadratic growth in size of the code has been found . Experiments with the programmable simulator confirm the main conjecture for suitable problems , although machine limitations restricted the number of processors to 24. An annotation approach to increasing user control has been developed . In parallel with the architecture work, a new process-oriented formalism for modelling distributed evaluation mechanisms has been developed and the results are being presented in .
Work in Hand
Because it is already clear that the early approach can be made to work for some problems, considerable effort is being devoted to extending the basic results.
The departure of Dr F. W. Burton for Colorado has led to the early closure of this grant, but related investigations continue under a new DCS grant awarded to Dr Sleep.
Tree algorithms are well-suited to the present architecture, which in principle at least can be constructed directly from emerging standard VLSI components.
M. R. Sleep, Applicative Languages, Dataflow and Pure Combinatory Code, IEEE COMPCON 80 Digest, San Francisco, February 1980.
M. R. Sleep and F. W. Burton, Towards a Zero Assignment Parallel Processor, Proceedings 2nd International Conference on Distributed Computing Systems, Paris, April 1981.
M. R. Sleep and F. W. Burton, Communication in a Distributed Implementation of an Applicative Language, Proceedings 6th ACM ICS on Systems Architecture, London, April 1981.
F. W. Burton and M. R. Sleep, The Zero Assignment Parallel Processor Project, Symposium on Functional Languages and Computer Architectures, Lab. for Programming Methodology, Chalmers University, Goteborg, July 1981.
R. M. Keller and M. R. Sleep, Applicative Caching, Proceedings ACM/MIT Conference on Functional Programming Languages and Computer Architectures, Portsmouth, New Hampshire, October 1981.
F. W. Burton and M. R. Sleep, Executing Functional Programs on a Virtual Tree of Processors, Proceedings ACM/MIT Conference on Functional Programming Languages and Computer Architectures, Portsmouth, New Hampshire, October 1981.
R. Kennaway and M. R. Sleep, Expressions as Processes, Proceedings ACM Symposium on LISP and Functional Programming, Pittsburgh, August 1982.
R. Kennaway and M. R. Sleep, Parallel Implementation of Functional Languages, Proceedings 1982 International Conference on Parallel Processing, Ohio, August 1982.
F. W. Burton, A Linear Space Translation of Functional Programs to Combinators, Information Processing Letters, Vol 14, No 5, pp 201-204, July 1982.
F. W. Burton, An Efficient Functional Implementation of FIFO Queues, Information Processing Letters, Vol 14, No 5, pp 205-206.
F. W. Burton and M. M. Huntbach, Virtual Tree Machines, IEEE Transactions on Computers, March 1984, pp 278-280.
F. W. Burton, Annotations to Control Parallelism and Reduction Order in the Distributed Evaluation of Functional Programs, ACM Transactions on Programming Languages and Systems (TOPLAS), Volume 6, Issue 2, April 1984.
F. W. Burton, A Distributed Evaluation Mechanism for Functional Programming, Proceedings IEEE Mediterranean Electrotechnical Conference, Athens, May 1983.
J. R. Kennaway and M. R. Sleep, Applicative Objects as Processes, Proceedings 3rd International Conference on Distributed Computing Systems, Miami, October 1982.
M. R. Sleep and S. Holmstrom, A Short Note Concerning the Lazy Reduction Rules for Append, Software Practice and Experience, December 1982.
J. R. Kennaway and M. R. Sleep, Novel Architectures for Declarative Languages, Software and Microsystems, Vol 2, No 3, June 1983.
DR M R SLEEP
THE POTENTIAL OF PURE COMBINATORY CODE FOR DISTRIBUTED COMPUTING
Jan 80 - Dec 82
Background and Nature of Research
Conventional computers do one thing at a time to elements of a large central memory, and most conventional programming languages reflect this fact: for example, when writing a FORTRAN program which manipulates a large array of data, the basic assumption of every programmer and analyst is that examining or changing data elements involves roughly equal costs, wherever the elements are placed in memory.
Because fifth generation diffusion architectures dynamically decompose a larger computation into a number of sub-computations which are distributed and independently executed on physically distinct VLSI components, a fundamental assumption of conventional computing fails. A new model of computation, in which account is taken of distributed costs, is required.
Because variable names must be looked up in an environment to give them a meaning, every use of a variable in a program implies communication costs which are (cleverly!) hidden in conventional implementations. Turner  reports the practicability of variable-free code, in which the detailed distribution paths are encoded using combinators. The goal of this project was to investigate the potential of such code for distributed implementations.
This project started with a strong belief that combinatory code was of fundamental importance for distributed computing: early ideas are presented in . It ends with a surprisingly simple picture of how combinators work, which shows clearly that key combinators are really just data-switching elements which can be used to avoid sending data to places it is not required. This director string model  emerged from work with Dr J R Kennaway, who showed that combinator translations can in the worst case grow quadratically with the size of the original program . A compile time balancing scheme was suggested, and details appeared in a paper by Dr F W Burton  which led to a linear space translation. A simpler count-based encoding of director strings has been shown to yield a near linear translation .
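In modern terms the director-string idea can be sketched as a routing computation over application nodes. The Python below illustrates the routing rule only; the director symbols follow the Kennaway-Sleep notation, but the data representation and function names are ours.

```python
# Directors route an incoming argument at each application node (f, a):
# '/' sends it into f only, '\' into a only, '^' duplicates it for
# both, and '!' discards it. Applications are 2-tuples; anything else
# is an atom. A sketch of the routing rule, not the full machine.

def free_in(var, term):
    """Does `var` occur in `term`?"""
    if isinstance(term, tuple):
        return free_in(var, term[0]) or free_in(var, term[1])
    return term == var

def director(var, node):
    """Choose the director for `var` at application node (f, a)."""
    f, a = node
    left, right = free_in(var, f), free_in(var, a)
    if left and right:
        return '^'    # both subterms need the argument: duplicate
    if left:
        return '/'    # route left only
    if right:
        return '\\'   # route right only
    return '!'        # unused: discard, never transmitted
```

A subterm needing no copy of the argument never receives one, which is precisely the data-switching reading of combinators described above.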
Direct use of these ideas could lead to an efficient parallel reduction machine based on combinatory code.
1. D. A. Turner, A New Implementation Technique for Applicative Languages, Software Practice and Experience, January 1979.
2. M. R. Sleep, Applicative Languages, Dataflow and Pure Combinatory Code, COMPCON 80 (Spring) digest, San Francisco, February 1980.
3. J. R. Kennaway and M. R. Sleep, Director Strings as Combinators, ACM Transactions on Programming Languages and Systems (TOPLAS), Volume 10, Issue 4, October 1988.
4. J. R. Kennaway, The Complexity of a Translation of Lambda-calculus to Combinators, Computer Studies Centre, University of East Anglia, June 1982.
5. F. W. Burton, A Linear Space Translation of Functional Programs to Turner Combinators, Information Processing Letters, July 1982.
6. J. R. Kennaway and M. R. Sleep, The Efficiency of Counting Director Strings, UEA CSD, copies available from contact.
DR M R SLEEP
PROGRAMMING AN EXTENSIBLE ARRAY OF TRANSPUTERS USING A NATURALLY EXTENDED FUNCTIONAL NOTATION
Oct 83 - Sept 86
Background and Nature of Research
Declarative programming using functional and logic notations is proving an increasingly serious contribution to Software Engineering, especially when inherent parallelism is exploited to yield usable performance. However, existing notations considerably restrict the user's ability to express control information.
Declarative code which runs with reasonable speed tends to make heavy use of knowledge of the underlying operational semantics of a particular implementation (e.g. normal order for functional languages, depth-first leftmost with cut for Prolog). The resulting code tends to be heavily obscured by such programming techniques.
The intention is to develop a natural extension of a functional language which accommodates process notations capable of expressing detailed control information. Early work in this direction is described in . The method will be to work towards a soundly based formalism with clean semantics, and only then consider distributed implementation problems in detail. Nevertheless, the need for workable operational semantics will inform the design process throughout.
Strong use of the correspondence principle will be made: in particular, the notation we seek should contain quite naturally the usual lambda calculus.
This project has just started but it does so with the benefit of some early work funded by a previous grant. This has led to a new prototype notation (called DyNe) which is a significant advance over the earlier LNET  because there is exactly one central mechanism for abstraction.
Work in Hand
Various strands from the earlier project need to be written up, including director strings and a new implementation technique for functional languages. Because of the correspondence principle, we expect such work to inform the main task of developing DyNe.
DyNe is seen as a very high level systems programming language for transputer networks, which contains the usual lambda calculus as a sub language.
1. J. R. Kennaway and M. R. Sleep, Expressions as Processes, ACM Symposium on LISP and Functional Programming, Pittsburgh, August 1982.
DR R DOWSING
THE SPECIFICATION AND IMPLEMENTATION OF PROGRAMS ON A MULTIPROCESSOR
April 80 - Sept 83
Background and Nature of Research
Programming distributed computing systems differs in nature as well as in degree from programming single processor von Neumann machines. The differences addressed in this project concern the way in which programs are designed for a multiprocessor or distributed computer system, and the level at which algorithms need to be described for efficient implementation on a number of different architectures.
In order to implement a design on a multiprocessor or distributed system it has to be partitioned into a number of parts and these parts allocated to physical units in the system. Exactly how this is done depends on the nature of the algorithm and the target system architecture. This research is aimed at producing algorithm descriptions which may be mapped onto a range of architectures.
Originally the aims of this project were to investigate the properties required of an operating system to run on a multiprocessor system such as CYBA-M , and to identify the hardware support necessary for such systems. Concurrently with this it was intended to study the design and specification of a number of programs on a multiprocessor, to understand the design methodology and design aids required for distributed systems.
Due to a number of problems, including the move of the grant holder to another university and difficulty in recruiting personnel, it has proved impossible to tackle both of these tasks, and the decision was taken to concentrate on the second objective, the design and specification of programs for distributed systems.
The aim of the project is to produce a distributed version of Path Pascal  on a Cambridge Ring based collection of multi-microcomputer systems and to measure the performance of a range of examples in this language. A subsidiary aim is to discover weaknesses in the language with a view to proposing modifications or a new language.
As mentioned previously, there have been problems in appointing staff and progress has been relatively slow. However, for the present calendar year work has progressed more rapidly: a number of internal reports have been produced and a number of seminars presented on the work. A new interpreter for Path Pascal has been designed and implemented, requiring much less space than the previous version. This version runs under the IDRIS operating system. A study of a number of different multiprocessor algorithms for an assembler coded in Path Pascal has been completed, including performance measurements on a single processor simulator. Some papers on this study are in preparation.
Work in Hand
The new interpreter system has to be ported to a multiprocessor Intel 8085 system. This is straightforward but time consuming. A paper study is being performed on a language based on path expressions suitable for implementation on a distributed system, for which Path Pascal is not suitable. Time will preclude an implementation of this language.
In addition to the research team of the investigator and the two research assistants, a research student is investigating languages for describing the architecture of distributed computing systems, and a research assistant on a contract funded by British Telecom is investigating the design and specification requirements of telecommunication systems.
The applicability of the project is demonstrated by the interest of British Telecom and their funding of a research contract.
1. E. L. Dagless, A Multimicroprocessor - CYBA-M, Proceedings of IFIP, pages 843-9 (1977).
2. R. H. Campbell and R. B. Kolstad, Path Expressions in Pascal, Dept of Computer Science report, University of Illinois (1979).