Structure of Operating Systems

C J Pavelin

14.06.1976

Engineering Paper 48

This is an attempt to give details of the structure of the operating systems on machines proposed for the central Interactive Facility. (Details of facilities, user interface, algorithms etc form the subjects of other papers.)

Documentation in this area is even more sparse than in other areas, and the paper is based on such documentation as is available and on hearsay.

The level of detail and the topics covered vary from one system to another. (In the case of UNIVAC, for example, it is scarcely more than an introduction to their own peculiar jargon.) No attempt at comparison is made. The systems are:

BURROUGHS    MCP
CDC          NOS
DEC          TOPS 10
UNIVAC       EXEC

1. BURROUGHS (MCP)

The operating system (Master Control Program) and all the jobs form a multitasking ALGOL machine. The operating system procedures form the outermost block of every process in the system. Thus Display Register zero (D0) always points at the table of descriptors of all the operating system segments. These descriptors are kept in the stack trunk (level 0 stack). A descriptor for the stack vector (an array of pointers to each stack in the system) is also kept here.

The level 1 stack of a job contains the descriptors of its procedures; any procedure call must access the descriptor at this level. Jobs may share all their code (sharing the level 1 stack) but not individual segments other than those at level 0. The actual working stack of a user job is thus at level 2 (or higher if the process spawns other processes).

[Diagram: MCP stack organisation - the stack trunk (D0) holds the MCP procedure descriptors and the stack vector with the descriptors of the stacks; D1 addresses a job's procedure stack; D2 and above address the active stack of the job.]

The level 0 and 1 stacks are not typical in that they do not contain working data, etc.

A process is defined by its stack and stack front, the current Local Name Base (from which the Display Registers can be set up) and bits representing processor state etc. For an inactive process all these are stored in the first word of the stack itself, and thus all the CPU requires to restart a process is its stack number (an index on the stack vector). Processes are queued through the second word of their stacks. Thus there is a ready queue of processes for the processor(s), the one at the front being of highest priority. There are also various wait queues and an event system.

There is no documentation on what independent processes there are within the operating system (although there is certainly a SWAPPER responsible for the swap area). The implication is that there are very few, most functions being performed in the user process with interlocks on shared data. Interrupts, internal and external, are dealt with on the stack of the interrupted process exactly like a forced procedure call. If the interrupt routine wishes to change process it issues a Move Stack operation which stores its own state as above, loads up that for another stack and exits.
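
The process representation and the Move Stack switch just described can be sketched as follows. This is a rough C approximation for illustration only: the names (saved_state, stack_vector, move_stack) are invented, and on the real machine this state lives in words of the stacks themselves rather than in C structures.

    #include <stddef.h>

    #define MAX_STACKS 256                  /* assumed system limit */

    typedef struct {
        unsigned stack_front;               /* top-of-stack pointer */
        unsigned local_name_base;           /* LNB, from which the Display Registers are set up */
        unsigned processor_state;           /* mode bits, interrupt masks, etc */
    } saved_state;

    typedef struct {
        saved_state state;                  /* "first word": all the CPU needs to restart the process */
        int next_stack;                     /* "second word": link used to queue the process */
        /* ... the working stack itself follows ... */
    } stack;

    static stack *stack_vector[MAX_STACKS]; /* addressed from the stack trunk at D0 */
    static int ready_queue = -1;            /* stack number of the highest-priority ready process */
    static int current_stack;

    /* Move Stack: the interrupt routine, running on the interrupted process's
       stack, stores its own state, loads that of another stack and exits. */
    void move_stack(int new_stack, saved_state running)
    {
        stack_vector[current_stack]->state = running;   /* store own state as above */
        current_stack = new_stack;
        /* control now resumes from stack_vector[new_stack]->state; the processor
           needed only the stack number (an index on the stack vector) to find it */
    }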

Job control

Job control (Work Flow Language) is pre-compiled and thus running a job is equivalent to running a WFL program. To load a user program, the WFL program starts another process, creating a new stack, although the WFL process and the user program do not run together. It is not clear whether the new process is spawned at a higher level (sharing the D1 stack with the WFL program) or is created as totally separate.

Time-sharing appears to be controlled by a basic process which sets up a CANDE process for each terminal, to analyse terminal commands. Again, loading a program at a terminal causes further processes to be set up. CANDE commands and the WFL language are not compatible.

2. CDC CYBER SERIES (NOS)

Most operating system functions are performed by processes in the PPUs. PP0 always contains the system monitor - the overall controlling process. Most of the others load themselves with overlay code from the disc as necessary. Each PPU contains a nucleus which, when the PPU is idle, loops round examining a word in the PP communications area in central memory. If this contains the identifier of a PP program, the PPU will load and obey this, using input parameters in the communications area and leaving output parameters there. The execution of this function may itself involve a set of overlays. The following functions are covered by the PPUs.

Names of the PPU subsystems include:

There are also certain processes (known as sub-systems) in the central memory whose main function appears to be to create buffers of information to be passed to appropriate user programs. The subsystems, all of which have counterparts in the PPU as well, are:

Other system functions in central memory are all performed by CPUMTR. This seems to be an extension of the PP0 system monitor but it is not clear who does what. CPUMTR, for example, transfers buffers from a subsystem to a user program.
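
The PPU nucleus loop described at the start of this section might be sketched as below. This is illustrative only: the names (pp_comm_word, load_overlay, run_pp_program) and the layout of the communications area are assumptions, and the real nucleus is PPU code reading central memory directly.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint32_t program_id;      /* 0 = idle; otherwise identifier of a PP program */
        uint32_t input[4];        /* input parameters left by the requester */
        uint32_t output[4];       /* output parameters left for the requester */
    } pp_comm_word;               /* one slot per PPU in the central-memory communications area */

    static void load_overlay(uint32_t id)          /* stub: would fetch the PP program from disc */
    {
        printf("loading PP program %u\n", (unsigned)id);
    }

    static void run_pp_program(pp_comm_word *slot) /* stub: obey the program, which may itself
                                                      involve a set of further overlays */
    {
        slot->output[0] = slot->input[0];
    }

    void ppu_nucleus(volatile pp_comm_word *my_slot)
    {
        for (;;) {
            if (my_slot->program_id != 0) {          /* a request has been posted */
                load_overlay(my_slot->program_id);
                run_pp_program((pp_comm_word *)my_slot);
                my_slot->program_id = 0;             /* mark the PPU idle again */
            }
            /* otherwise loop round, examining the word again */
        }
    }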

Memory

The lower part of central memory consists of tables and CPUMTR code. This is all locked down (although buffer sizes can be varied when the system is loaded). Each swapped-in program is described by a control point in the central memory resident part. This contains the exchange package area - base address, field length and register dumps of the program - plus all other administrative information (a swapped-in program is said to be allocated to a control point). The exchange package is referenced by the exchange jump instruction (obeyed by PPU or CPU) in order to switch processes. User programs are swappable, subsystems normally not. Subsystems have certain privileges in the system requests they can make.
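
As a rough illustration of the control point layout just described (field names and sizes are assumptions; on the real machine this is a fixed-format block in the central-memory resident area, with registers wider and narrower than the C integer types used here):

    #include <stdint.h>

    typedef struct {
        uint32_t reference_address;   /* base address of the swapped-in program */
        uint32_t field_length;        /* size of its central-memory allocation */
        uint64_t x_regs[8];           /* register dumps restored by the exchange jump */
        uint32_t a_regs[8];
        uint32_t b_regs[8];
    } exchange_package;

    typedef struct {
        exchange_package xp;          /* referenced by the exchange jump to switch processes */
        uint32_t status;              /* plus all other administrative information */
        uint32_t priority;
    } control_point;

    #define MAX_CONTROL_POINTS 16     /* assumed: one per swapped-in program */
    static control_point control_points[MAX_CONTROL_POINTS];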

System calls

A system request from a program is achieved by setting a marker in the job communication area (the first 65 words of each program). This can be followed immediately by an exchange jump to CPUMTR or the program can continue processing until the exchange jump is performed on a time basis by the PP0 monitor. Either way, the request is eventually noticed and dealt with by CPUMTR or a PPU. In the latter case processing of the program may well continue while the request is being executed.
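
A minimal sketch of this request mechanism, using invented names (job_comm_area, exchange_jump_to_monitor) and an assumed position for the marker word within the communication area:

    #include <stdint.h>
    #include <stdio.h>

    #define JOB_COMM_WORDS 65                 /* the first words of each program, as above */

    static uint64_t job_comm_area[JOB_COMM_WORDS];
    #define REQUEST_MARKER 1                  /* assumed position of the request marker */

    static void exchange_jump_to_monitor(void)   /* stub: hands the CPU to CPUMTR */
    {
        printf("exchange jump: CPUMTR examines the request marker\n");
    }

    /* Issue a request and give up the CPU immediately ... */
    void request_and_wait(uint64_t request_word)
    {
        job_comm_area[REQUEST_MARKER] = request_word;   /* set the marker */
        exchange_jump_to_monitor();                     /* CPUMTR (or a PPU) services it */
    }

    /* ... or set the marker and carry on: the PP0 monitor performs the exchange
       jump on a time basis and the request is noticed and dealt with then.
       Processing of the program may well continue while it is being executed. */
    void request_and_continue(uint64_t request_word)
    {
        job_comm_area[REQUEST_MARKER] = request_word;
    }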

Job control

A PPU program does analysis of basic job control commands. However, TELEX and BATCHIO do their own analysis and commands to these are not necessarily compatible.

The word sub-system is also used in connection with TELEX: the user declares a current subsystem which determines the interpretation of some TELEX commands (eg which compiler to use). It appears to be simply a flag to TELEX in this context and has no system-wide significance.

3. DEC 10 (TOPS 10)

[Diagram: the TOPS 10 monitor - cyclic processes (command decoder, swapper, scheduler), spoolers, batch control, DAEMON, device handlers and scanner service, with the UUO handler forming the monitor interface to the user process at a teletype.]

The operating system consists of:

The monitor is a set of processes shown in the diagram above, plus the code to deal with monitor calls (Unimplemented User Operations) in programs. Code and tables required by the monitor are entirely in core. Thus, table space is compiled in for the maximum number of jobs to be accepted, the maximum number of files open, and so on. Each job is given a number (between 1 and the maximum number) and this is used to index tables which give its state, page table, etc. Within the job table, entries are chained into various queues - ready to run, i/o wait, etc. Most of the monitor is obeyed in supervisor mode; interrupts and i/o are handled in kernel mode.
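
The compiled-in tables can be pictured roughly as below; MAX_JOBS, the field names and the particular queues are assumptions for illustration.

    #define MAX_JOBS 63                     /* assumed compile-time maximum */

    enum job_state { JS_FREE, JS_READY, JS_IO_WAIT, JS_SLEEPING };

    struct job_entry {
        enum job_state state;
        unsigned page_table;                /* reference to the job's page map */
        int next_in_queue;                  /* chain field linking jobs on the same queue */
    };

    /* The job number (1 to MAX_JOBS) indexes these tables directly; entry 0 is unused. */
    static struct job_entry job_table[MAX_JOBS + 1];

    /* Heads of the queues which the chain field threads through the job table. */
    static int ready_queue   = 0;           /* ready to run */
    static int io_wait_queue = 0;           /* i/o wait */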

The handling of a monitor call can be regarded as part of the user process (although there is some change in address space in supervisor mode). If necessary the UUO handler will pass information to device handlers etc, before returning to user mode.

The command decoder, swapper and scheduler are processes (known as cyclic routines) which are run at each system clock interrupt.

Job control

The command decoder at each interrupt will read a system command, if there is one, from each terminal (via scanner service), analyse it and in most cases cause an appropriate system program to be loaded into the user's address space. Thus, nearly all commands are actually obeyed by a program in the user process. The terminal (or pseudo-terminal, in the case of a batch job) is thus in one of two modes: monitor mode when it is inputting a command to the command decoder, and user mode when it is inputting to a user or system program in the user process.
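
A sketch of the cyclic command decoder, under assumed names (scanner_read_command, load_system_program): at each clock interrupt it polls every terminal line via scanner service and acts only on lines that are in monitor mode.

    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_LINES 64                           /* assumed number of terminal lines */

    enum line_mode { MONITOR_MODE, USER_MODE };
    static enum line_mode line_modes[MAX_LINES];

    static bool scanner_read_command(int line, char *buf, int len)
    {
        (void)line; (void)buf; (void)len;          /* stub for scanner service */
        return false;                              /* no command waiting */
    }

    static void load_system_program(int line, const char *cmd)
    {
        printf("line %d: loading program for command %s\n", line, cmd);
    }

    void command_decoder_tick(void)                /* run at each system clock interrupt */
    {
        char cmd[80];

        for (int line = 0; line < MAX_LINES; line++) {
            if (line_modes[line] != MONITOR_MODE)  /* user mode: input goes to the job instead */
                continue;
            if (!scanner_read_command(line, cmd, sizeof cmd))
                continue;
            load_system_program(line, cmd);        /* most commands are obeyed in the user process */
            line_modes[line] = USER_MODE;          /* the terminal now talks to that program */
        }
    }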

DAEMON

Conceptually, DAEMON is a swappable piece of the monitor. It is intended to perform monitor functions that require significant amounts of core space and do not require particularly fast response. Actually, DAEMON runs as one or more jobs, detached and in hibernation except when needed. It has privilege to access any files and users' core images and can attach itself to any user's terminal. DAEMON is awakened by the monitor when any job issues an appropriate command or monitor call.

Other privileged jobs

There are other processes which can be regarded as part of the operating system but which, apart from certain privileges, are ordinary jobs (cf. jobs running under :MANAGER in GEORGE). The batch system is implemented by such jobs - the input spooler sets up data and command files from the user's job description, and the batch controller initiates the job with a pseudo-teletype, feeding in system commands or program data as appropriate.

User programs

A program consists of a low segment and (optionally) a high segment. The high segment is shareable - in fact it has a global high segment number. There is a monitor call allowing a program to transfer control to another. Thus, the system command to load a program appears to be implemented by the command decoder causing the current core image to obey a monitor call which transfers to the required program.

4. UNIVAC (EXEC)

Most EXEC components operate as activities in a very similar way to user activities. These are known as EXEC workers. The others are those for which switching is either impossible or requires an unacceptable overhead; these interlock processes run in executive mode. There are three levels of interlock process, dealing with:

  1. handling external interrupts - further interrupts locked out.
  2. certain input/output functions
  3. CPU dispatching, system calls (Executive Requests), some i/o.

These transient EXEC-mode processes are interrupt-driven. All other processes - EXEC workers and user jobs - are selected for the CPU(s) by the dispatcher (a sketch of the dispatch rule follows the list below). The activities are on one of a number of absolute priority queues:

  1. high level EXEC (abort routines)
  2. realtime user
  3. low level EXEC (nearly everything)
  4. time-sharing users
  5. etc

Levels (4) and below are time shared.
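
A sketch of the dispatch rule under these assumptions (queue representation and names invented): the dispatcher always serves the highest non-empty absolute-priority level, and activities at the time-shared levels are returned to the back of their queue when their slice expires.

    #include <stddef.h>

    #define N_LEVELS 5                /* 1 = high level EXEC ... 4 = time-sharing users, 5 = etc */
    #define TIME_SHARED_FROM 4        /* levels (4) and below are time shared */

    struct activity {
        int id;
        struct activity *next;        /* chain of activities waiting at one level */
    };

    static struct activity *queues[N_LEVELS + 1];   /* indexed by level, 1..N_LEVELS */

    /* Pick the next activity for a free CPU: absolute priority between levels. */
    struct activity *dispatch(void)
    {
        for (int level = 1; level <= N_LEVELS; level++) {
            struct activity *a = queues[level];
            if (a != NULL) {
                queues[level] = a->next;    /* remove from the front of its queue */
                a->next = NULL;
                return a;
            }
        }
        return NULL;                        /* no runnable activity */
    }

    /* When a time-shared activity's slice expires it goes back on the end of its
       queue; activities at the higher levels run until they block or finish. */
    void end_of_slice(struct activity *a, int level)
    {
        if (level < TIME_SHARED_FROM)
            return;                         /* not a time-shared level */
        struct activity **p = &queues[level];
        while (*p != NULL)
            p = &(*p)->next;
        *p = a;
    }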

Apart from the dispatcher, other EXEC workers are involved with higher level scheduling. It is not clear which components are independent processes.

Dynamic Allocator: responsible for store allocation; appears to be an independent process serving a queue of core requests for program loading, swapping etc.

Coarse Scheduler: puts runs into batch queue, processes job control for batch and time-sharing (demand) jobs. It uses the Control Statement Interpreter to interpret the job control.

Facilities Inventory and Selection: allocates resources.

Routines which provide the user interface with i/o facilities are known as symbionts, for some reason.

Memory

There is a permanently resident EXEC area in core, including interrupt locations, locked-down code etc. This is covered by the main I-Bank when an EXEC component is running. Another area, covered by the other I-Bank, contains some EXEC segments which are held in core for efficiency reasons and an overlay region which can hold the largest segment. (No EXEC segment requires another to be loaded.) There is a dynamically managed data space pool known as EXPOOL. The data controlling the execution of a user task - registers etc - is contained in the Program Control Table and is swapped out with the program. The precise address space of the executive processes is unclear. The whole thing appears to be complicated by the second 256K of store and the strange boundary which exists between them.
