The programming language Modula is unusual in that it attempts to combine relatively conventional multiprocessing facilities with references to the hardware of particular input/output devices in one coherent language structure.
The language also contains a sophisticated module mechanism with a strict security system for visibility control across module boundaries. The overall form of the language is based upon Pascal.
A compiler for Modula has been written at York in BCPL to run on a PDP-11 computer running the UNIX operating system. Code can be generated for a target PDP-11 or LSI-11 with or without the Extended Instruction Set Option. The grant was awarded to enable York to develop, document and maintain the compiler, to make copies available to other workers and to enhance it in response to requests from users.
The second release of the compiler was made in March 1979 to about 60 users both in the UK and overseas. From reports received we know that the compiler is being used successfully in a wide variety of projects. Full details of the first year of the project are given in . The only language restriction remaining in the implementation is that all names must be declared before use.
Effort is now concentrated upon:
Collaborators at other Universities are preparing versions that run under RT-11 (Bath) and RSX-11M (Southampton). A code generator for the INTEL 8086 has been produced in Australia.
The main activity during this reporting period has been the completion and initial distribution of the Release 3 compiler . The compiler was completed by the end of March 1980 and was first distributed at the UK UNIX User Group meeting at Heriot-Watt University on 31 March 1980. Since that time some 20 sites have updated to Release 3.
Of the new features of the R3 compiler (see  for complete details), perhaps the inclusion of secure separate compilation of modules is of most value and interest. The following extract, from , describes this new facility.
In York Modula a program consists of one or more program-modules where a program-module is a (level 0) module followed by a ".". Define and Use lists at level 0 are meaningful. Defined objects are 'remembered' in a so-called Modula System File (msf) for later Use by other (not yet compiled) modules. Objects found in a level 0 Use list are retrieved from the specified msf.
Some new rules apply to objects which are exported from a program-module. If an object is exported whose type is not one of the built-in types or whose type name does not appear in the program-module level Use list then its type name must also be included in the Define list.
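The export rule above can be sketched as a simple check. This is an illustrative Python sketch only, not part of the compiler; the data structures and the set of built-in type names are invented for the example.

```python
# Sketch of the export rule described above: an exported object whose
# type is neither built-in nor named in the level-0 Use list must have
# its type name included in the Define list.
# The built-in type names below are illustrative, not a complete list.
BUILT_IN = {"INTEGER", "CARDINAL", "BOOLEAN", "CHAR"}

def missing_from_define(exports, use_list, define_list):
    """exports: {object_name: type_name}. Return the type names that
    the rule requires in the Define list but which are absent."""
    needed = {t for t in exports.values()
              if t not in BUILT_IN and t not in use_list}
    return sorted(needed - set(define_list))

# An exported object 'p' of user-defined type 'Port' forces 'Port'
# into the Define list; a built-in type needs nothing extra.
print(missing_from_define({"p": "Port"}, set(), {"p"}))   # ['Port']
print(missing_from_define({"n": "INTEGER"}, set(), {"n"}))  # []
```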
The scheme is a bottom-up one which assumes worst case re-compilation. That is, if a module a is compiled, followed by a module b, and then a is submitted for re-compilation b is assumed obsolete and must also be re-compiled even if no export/import relation holds between it and a.
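The worst-case rule amounts to treating the compilation order itself as the dependency relation. A minimal sketch in Python (module names invented; this is not the compiler's own logic):

```python
# Sketch of the bottom-up, worst-case re-compilation rule described
# above: modules are recorded in compilation order, and re-compiling
# any module obsoletes every module compiled after it, whether or not
# a real export/import relation exists between them.

def obsolete_after_recompile(compilation_order, recompiled):
    """Return the modules that must also be re-compiled."""
    idx = compilation_order.index(recompiled)
    return compilation_order[idx + 1:]

# 'a' was compiled, then 'b'; re-compiling 'a' makes 'b' obsolete.
print(obsolete_after_recompile(["a", "b"], "a"))  # ['b']
```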
Since most Device Modules only export objects, and are rarely modified, it is good practice to compile them into an msf before any ordinary, system specific, modules.
Program-module initialisation code is executed in the order of compilation. This is achieved by a compiler option which generates a module linkage segment.
A version of the compiler, compatible with the latest version of UNIX (V7), was released in August 1980 after considerable help with testing at the University of Newcastle upon Tyne. This version of the compiler will run on the modified version of V7 UNIX produced by Colin Prosser at the Rutherford and Appleton Laboratories, which runs on PDP-11/40 and PDP-11/34 processors. This release of the compiler makes extensive use of V7 tools such as make.
The Modula Test Suite has also been greatly extended and has recently been modified to incorporate the new separate compilation facility.
The compiler has been completely rewritten in the proposed standard version of BCPL. This should enable an implementation of Modula to be generated for any computer that has a BCPL compiler which compiles the standard language; of course, a code generator would have to be rewritten for any new target machine. It is hoped to implement a PDP-11 cross compiler for Modula, running on a VAX under UNIX, in the near future.
Major industrial collaborations during the project have been with Linotype-Paul Limited on the production of Zilog Z80 and Texas TMS 9900 Modula compilers, and with Ford Aerospace on extensions to Modula (and the compiler) for use as the implementation language of an operating system security kernel (KSOS, currently the largest known Modula program, at approximately 30,000 lines).
Extremely valuable cooperation and feedback has also been provided by the Computer Laboratory of the University of Newcastle upon Tyne (Dr P. Lee) and Nottingham University Psychology Laboratory (Dr R. Henry).
One Research Assistant, Mr I. D. Collam, was appointed on 13 March 1978 for a period of three years. He left York at the end of September 1980.
1. J. Holden, I. C. Wand, An assessment of Modula, Software - Practice and Experience vol 10 pp 593-621 (1980).
2. I. C. Wand, Modula distribution and promulgation 1978, Department of Computer Science report number 17, University of York (1979).
3. I. D. Collam, Functional Specification of the Modula compiler release 2, Department of Computer Science report number 20, University of York (1979).
4. I. D. Collam, Functional Specification of the Modula compiler Release 3, York Computer Science Report 33.
5. C. J. Nolan, N. P. Hawkins, I. C. Pyle, and I. C. Wand, Modula on the Intel 8080 microprocessor, York Computer Science Report 26.
This project is studying the design of a distributed operating system for an indefinite number of linked computers. The computers are assumed to be homogeneous, to have no shared memory, to be connected via a ring, and to have some form of local filestore. Furthermore, the hardware is assumed to be available off the shelf. The distributed operating system is to have, as far as possible, the same functional characteristics and user interface as UNIX. The objective of this project is a feasibility study, with comparisons of complexity, cost and performance between a centralised version and an alternative distributed version of the same functional system.
During the first year of the project, the work carried out fell under a number of headings.
During the second year of the project York have concentrated upon the design of PULSE. Detailed reasons for adopting such an approach are given in , although briefly they are that such an approach has significant advantages over rival architectures in terms of simplicity of software structure, level of communications traffic, operational flexibility and so on.
The functions carried out by the operating system residing in each personal computer, together with the distributed file system, are crucial to the design. York have called the operating system PULSE as it provides a Personal UNIX-Like System Environment. However, the UNIX process model has been replaced in this design by the Ada tasking model. Furthermore, it is intended to use Ada as the implementation language, as they believe that its powers of abstraction and its strong typing are major advantages given the complexity of programming in a distributed environment. It is expected that these factors will contribute greatly towards a well-structured and reliable system.
The PULSE consists of a kernel, which minimally supports a number of Ada programs running in different address spaces, and an inter-program communication facility (IPC) for communication both within and across machines, together with a number of server programs (eg local file server and network server). One of the goals is that the resources provided by these server programs may, if efficiency dictates, be supplied by the operating system kernel in such a manner as to be transparent to user (client) programs.
The need for communication between client and server programs within the PULSE and across the network requires an IPC facility which will give consistent access to resources supplied by the local kernel, the local user and the remote user. It must also be compatible with the Ada view of task communication. Various recent UNIX-related IPC designs have been studied during the initial investigation. These included the Arachne link , TRIX streams , UNIX Version 7 multiplexed files  and VAX Berkeley UNIX ports .
Following these studies York have designed an IPC facility based on the concept of an Ada buffer task called a Medium . Programs communicate with each other by sending messages through Mediums. Messages are sent between machines via network servers. These servers provide the interface between the PULSE machines and the Cambridge ring. Any program wishing to communicate with a remote program will do so via the Network Servers running within each PULSE machine, although the use of the servers may be hidden from programs by incorporating a Medium naming convention into the Distributed File System or DFS .
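As a rough illustration of the Medium concept, the following Python sketch models a Medium as a thread-safe buffer between a sending and a receiving program. This is not PULSE code: in PULSE the Medium is an Ada buffer task reached through rendezvous, and the names and message text below are invented.

```python
# Illustrative sketch only: a Medium modelled as a bounded message
# buffer between two programs. A thread-safe queue stands in for the
# Ada buffer task's accept/forward behaviour.
import queue
import threading

class Medium:
    def __init__(self, capacity=8):
        self._buf = queue.Queue(maxsize=capacity)

    def send(self, message):
        # Producer side: analogous to an entry call on the buffer task.
        self._buf.put(message)

    def receive(self):
        # Consumer side: blocks until a message arrives.
        return self._buf.get()

def demo():
    m = Medium()
    received = []
    def server():
        received.append(m.receive())
    t = threading.Thread(target=server)
    t.start()
    m.send("open /etc/motd")   # hypothetical request text
    t.join()
    return received

print(demo())  # ['open /etc/motd']
```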
It has been decided to base the Distributed File System design on the premise that users must be able to log in on any PULSE machine and access their files conveniently, and that they should be able to unplug their machines from the network and still access files in the same fashion as when they are connected to the network. This implies that each personal computer has its own local filing system which must duplicate some files held on a file server PULSE machine connected to the network.
This file server (in fact there could be more than one) is just another PULSE machine with a large amount of disk (and possibly magnetic tape) storage that provides a remote file service. Its operating system will be the same as that of user PULSE machines providing services to them via user level programs. The collection of file systems forms a distributed file system with unified naming, ownership and access mechanisms.
In parallel with the design work, York have experimented with the existing distributed operating system Arachne . This system, originally designed at the University of Wisconsin Madison, runs on a network of LSI 11/02s connected by medium speed lines. All the machines in this network are also connected to a PDP 11/40, running V6 UNIX, which acts as the file server.
The Arachne operating system has been ported onto the Terak microcomputer, replacing the PDP11/40 with a VAX, running V7 UNIX. Arachne has been modified so that each Terak behaves as a personal computer and has been extended so that local files may be held on the Terak floppy disc in UNIX V7 format enabling transparent access to local and remote files.
This experiment has given York an insight into some of the problems involved in constructing a distributed message based UNIX-like system.
Two research assistants were recruited to work on the project.
1. A. J. Wellings, G. A. Tomlinson, Distributed UNIX - Which Way?, Distributed UNIX Group, Memo 3, Department of Computer Science, University of York, April 1980.
2. CMU: Proposal for a Joint Effort in Personal Scientific Computing, Department of Computer Science, Carnegie-Mellon University, 23 August 1979.
3. A. J. Wellings, G. M. Tomlinson, A Personal UNIX-like Operating System Based upon Ada, Distributed UNIX Group memo 4, Department of Computer Science, University of York.
4. Global Design Decisions, Distributed UNIX Group memo 5, Department of Computer Science, University of York, 7 July 1980.
5. R. F. Rashid: An Inter-Process Communications Facility for UNIX, Department of Computer Science, Carnegie-Mellon University CMU-CS-80-124, 11 July 1980.
6. H. Sturgis, J. Mitchell, J. Israel: Issues in the Design and Use of a Distributed File System, Xerox Corporation undated paper.
7. Carnegie-Mellon University Department of Computer Science, 'Research in Personal Computing at Carnegie-Mellon University', October 1980.
8. S. A. Ward and C. J. Terman, An Approach to Personal Computing, Laboratory for Computer Science, MIT, December 1979.
9. L. M. Casey and N. Shelness, A Domain Structure for Distributed Computer Systems, Proc of the Sixth Symposium on Operating System Principles, November 1977.
10. Department of Computer Science, Brown University, A Proposal for Industrial Collaboration on the Brown University Instructional Computing Environment, 1980.
11. D. Redell et al., Pilot: An Operating System for a Personal Computer, CACM Vol 23(2) pp 81-92, February 1980.
12. G. J. Antonella et al, SDS/NET - An Interactive Distributed Operating System, Proc of the COMPCON 80 Fall Distributed Computing Conference, pp 487-493, Washington DC, September 1980.
13. A. B. Barak and A. Shapir, UNIX with Satellite Processors, Software Practice and Experience, Vol 10, pp 383-392, 1980.
14. D. Nelson, Apollo Domain Architecture, Apollo Computer Inc, January 1981.
15. G. M. Tomlinson, I. C. Wand and A. J. Wellings, Distributed UNIX Project 1980, YCS.40, Department of Computer Science, University of York, 19 December 1980.
16. A. J. Wellings, Simulation of a Distributed UNIX System, Distributed Unix Group: Memo Thirteen, Department of Computer Science, University of York, February 1981.
17. R. Finkel and M. Solomon, The Arachne Kernel Version 1.2, Computer Science Department, University of Wisconsin, April 1980.
18. S. A. Ward, TRIX: A Network-oriented Operating System, Laboratory For Computer Science, MIT, December 1979.
19. K. Thompson and D. M. Ritchie, UNIX Programmer's Manual, Seventh Edition, Bell Laboratories, 1978.
20. R. F. Rashid, An Inter-Process Communication Facility for UNIX, CMU-CS-80-124, Department of Computer Science, Carnegie-Mellon University, March 1980.
21. A. J. Wellings, Mediums - An Inter-Program Communication Facility for PULSE, Distributed Unix Group, University of York, March 1981.
22. A. J. Wellings, Inter-Program Communication for PULSE, Distributed Unix Group: Memo Ten, Department of Computer Science, University of York, December 1980.
23. G. M. Tomlinson, The PULSE-Net Distributed File System, Distributed Unix Group: Memo Twelve, Department of Computer Science, University of York, February 1981.
24. G. M. Tomlinson, Considerations in the Design of the PULSE-Net Distributed File System, Distributed Unix Group: Memo Eight, Department of Computer Science, University of York, October 1980.
25. M. H. Solomon and R. A. Finkel, The Roscoe Distributed Operating System, Proc of the Seventh SIGOPS Symposium, December 1979.
There is a trend to decentralise computing resources. As a result, several research programmes have been set up to investigate the possibility of adapting existing systems to a distributed role.
The origins of the present project lie in an earlier investigation  into the feasibility of distributing the popular and successful UNIX operating system among several processors. Several topologies were considered: a network of personal computers each with its own terminal and disk was finally chosen as the most suitable; the communication subsystem is used to share costly resources. This design provides the basis for the present project.
In the earlier project the original goal of distributing UNIX proper was altered to that of building an operating system which retains some of the features of UNIX. There is to be a hierarchical file system, whose structure and operation is similar to UNIX. However, the process model adopted is entirely different, as it is based on the Ada task.
The machines have a client-server relationship to each other. Each machine on the network will run a common kernel: roles are determined merely by the programs run on top of this kernel. For example, a machine may act as a printer server because it happens to have a device attached; it acts as a client to a file server in that it may request files to print.
There are two major requirements for the projected system. One is that each machine should be capable of running stand-alone without logical or physical connection to the network. The other is that the filing system should appear the same, wherever a user happens to be; although personal computers have been specified, users will be able to use machines other than their own, albeit with some loss of privilege. The personal computers are to be connected together by a high-speed local area network, in this case the Cambridge Ring .
The prime reasons for the use of personal computers are that the software structure is simple, the communication load is kept low, and the general operation of the system is kept reasonably flexible.
Ada has been chosen as the language around which the implementation is to be built. It was decided early on that the problems of developing a distributed system would require a language that provided a good degree of abstraction and type control, and since other projects in the department are closely concerned with the use and implementation of the Ada language , it seemed the obvious choice. An important result of using Ada is that all inter-program communication is achieved through task rendezvous: kernel objects called Mediums act as buffer tasks, which accept and forward messages.
The whole system has been named PULSE. Each machine in the network must run at least a kernel and a file server. The kernel supports the requirements of the Ada language, as well as allowing several programs to run concurrently, and to intercommunicate. It also provides basic management facilities for the allocation and manipulation of new program images. Certain Ada exceptions may be raised remotely by and in programs, and the kernel is responsible for their proper forwarding and responses.
The hierarchical filesystem is implemented by an instance of a file server program running on each PULSE machine. This program is written in Ada, and makes full use of the language's tasking facilities. Each server is responsible for all access to, and management of, files on its machine. This includes the loading of programs, and the association of particular IPC channels with file names.
In order to improve the reliability and availability of file manipulation, a scheme of file replication has been adopted. A particular file exists at least as a master; additional read-only copies, known as duplicates, may be created by users. The choice of which copy of a file to use depends, for instance, on whether updating is necessary, whether the master is younger than its duplicates, and whether a particular machine, and hence its disk, is available. A full description is given by Tomlinson et al .
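The considerations above can be sketched as a selection function. This is an illustrative Python sketch under invented data structures, not the PULSE file system's own algorithm.

```python
# Sketch of choosing between a master file and its read-only
# duplicates, following the considerations described above:
# updating forces use of the master; a duplicate older than the
# master is stale; an unavailable machine rules its copy out.

def choose_copy(copies, need_update):
    """copies: list of dicts with keys 'kind' ('master'/'duplicate'),
    'mtime' (modification time) and 'available' (bool).
    Returns the chosen copy, or None if no usable copy exists."""
    usable = [c for c in copies if c["available"]]
    masters = [c for c in usable if c["kind"] == "master"]
    if need_update:
        # Only the master may be written.
        return masters[0] if masters else None
    if masters:
        master_time = masters[0]["mtime"]
        # Prefer an up-to-date duplicate; fall back to the master.
        for c in usable:
            if c["kind"] == "duplicate" and c["mtime"] >= master_time:
                return c
        return masters[0]
    # Master unavailable: any duplicate still gives read-only access.
    return usable[0] if usable else None
```

For reading, an up-to-date duplicate is preferred so that the master's machine need not be reachable; for writing, the call fails cleanly when the master is unavailable.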
A basic set of software tools has been written in Ada, in order to test the current system. These include a simple shell which allows programs to run in both foreground and background.
Work is proceeding at the moment on the automatic updating of duplicate files once their master has been changed. In particular, effort is directed to the ability of a user to set or inhibit the amount of searching performed by the file system. The search map consists of four areas: they are local duplicates, local masters, remote masters, and remote duplicates. The first three categories may be selected, but the last still requires development.
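The four-area search map can be sketched as follows. This is a hypothetical Python illustration (area names taken from the text, function names invented); searching remote duplicates is excluded because the report notes it still requires development.

```python
# Sketch of the four-area search map described above. The first
# three areas may be selected or inhibited by the user; searching
# remote duplicates is not yet supported, so it is never returned.

SEARCH_AREAS = ("local duplicates", "local masters",
                "remote masters", "remote duplicates")

def search_order(selected):
    """Return the areas to search, in map order, honouring the
    user's selection; 'remote duplicates' is not yet selectable."""
    return [a for a in SEARCH_AREAS
            if a in selected and a != "remote duplicates"]

print(search_order({"local masters", "remote masters"}))
# ['local masters', 'remote masters']
```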
Some experiments are in progress, investigating distributed user programs. For example, no provision is made at a low level for loading programs remotely, so a suite of programs is being developed which can implement this. The servers in this suite associate IPC capabilities with filenames, which in turn are accessed by clients. Another experimental program is a simple mail server, which requires senders to update a common destination file, and then to inform the recipient by means of a named IPC capability.
The project investigator is Dr I. C. Wand, and the Research Assistants are D. Keeffe, G. M. Tomlinson and A. J. Wellings.
1. A. J. Wellings, I. C. Wand and G. M. Tomlinson, Distributed UNIX Project 1981, YCS.47(1981), Department of Computer Science, University of York (21 December 1981).
2. M. V. Wilkes and D. J. Wheeler, The Cambridge Digital Communication Ring, Local Area Communications Networks Symposium. Mitre Corp. and National Bureau of Standards, Boston (May 1979).
3. J. A. Murdie, Functional Specification of the Release 0 Ada Workshop Compiler, YCS.54, Department of Computer Science, University of York, (October 1982).
4. G. M. Tomlinson, D. Keeffe, I. C. Wand and A. J. Wellings, The PULSE Distributed Filesystem, Software Practice and Experience 15, 11 (Nov 1985).
The objective of this project, which is funded through the EMR (q.v.) Contract mechanism, is to develop software to link the Unix systems of the DCS research groups via SERCnet or British Telecom's PSS network, to facilitate more direct cooperation and communication and a greater sharing of resources.
The York UNIX-X25 communications package has been implemented as follows : a FALCON SBC-11/21 or LSI-11/02 front-end processor, providing X25 levels 1, 2 (LAPB) and 3, a Transport Service and Host X29; and a suite of user programs running in Unix. The system supports 16 X25 logical channels which are designated (by the user) for use as incoming or outgoing X29/TS29 calls or FTP/MAIL streams.
York have used the COMSYS X25 package written at UCL, to avoid duplication of effort, and have implemented Yellow Book Transport Service and Host X29/TS29 and a protocol to handle data flow between the front-end and Unix. An additional device driver has been provided for Unix.
The user programs include
There is full on-line documentation for these programs.
Release 1 was distributed in late 1982. Release 2, scheduled for September 1983, offers much enhanced user programs and greater reliability of the front-end. Release 2.1, which supports the PERQ running PNX, is undergoing field tests and will be available by October 1983.
Future plans include support for other versions of Unix, integration of the Bristol JTMP software, implementation of the ISO transport layers, and high-performance front-end software in C (so that it can run on non-DEC hardware).
1. K. S. Ruttle and I. C. Wand, X25-UNIX: Memo Nine, Design Proposal, Department of Computer Science, University of York, March 1980.
2. K. S. Ruttle, UNIX-X25 Communications: Release 2 Product Description, Department of Computer Science, University of York, September 1983.