
Evaluation of Single User Systems

K Robinson

RAL-86-050 June 1986

ABSTRACT

The paper describes an evaluation exercise which took place during 1984 for the procurement of single-user systems (workstations) for the SERC Common Base Programme and the Alvey Directorate. After describing the background, a detailed comparison of three machines - the Apollo DN550, the SUN Microsystems SUN2/120, and the Whitechapel Computer Works MG-1 - is presented, together with results for the machine already in the Common Base, the ICL Perq2.

1 PROLOGUE - SINGLE USER SYSTEMS AND THE COMMON BASE POLICY

In 1980 a new type of computing engine became available to the Rutherford Appleton Laboratory in the form of the Three Rivers Computer Corporation's Perq machine. (ICL subsequently acquired manufacturing and marketing rights in the UK and elsewhere.) This type of device, called a Single User System (SUS), offered not only conventional features such as:

but also the following high quality graphics items:

The combination of effective conventional computing capabilities with high quality graphics meant that a completely new style of human-computer interaction was possible, based on a technique for display organisation known as window management. Other types of single-user machines, such as personal computers, could not offer this style of interface in a truly effective way.

1.1 The Common Base Policy

The Perq at its introduction cost about £20k for the typical configuration above. At this cost level, it was apparent to SERC that a large number of different types of SUS would come onto the market, and that effective support of its research activities could only be maintained by limiting support to a few systems at any time. Only in this way could the familiar problems of software duplication, difficulty in transferring results of research and development from one research group to another, and so on be overcome. SERC's Common Base Policy encapsulated this principle, by defining:

The Common Base Policy was always intended to be a dynamic entity: both equipment and software were (and are) expected to change over time. Software developed on the Common Base, both at RAL and elsewhere, would be made available via the Common Base software distribution mechanism.

2 INTRODUCTION

Since Perq was introduced into the Common Base, many developments had taken place in the SUS marketplace, and by 1983 it seemed timely to commence the evaluation process. Some 120 vendors who either had or might have suitable products were sent an outline operational requirement for a SUS suitable for inclusion in the Common Base. Over 80 replies were received, and an analysis of these indicated that 10 suppliers could possibly meet the specification. These suppliers were:

These 10 suppliers were sent a detailed Operational Requirement (OR), described in [1], and the responses showed that three suppliers had equipment suitable for evaluation in the timescale - Apollo (DN550), SUN (SUN2/120) and Whitechapel (MG-1). At this point, the Alvey Directorate requested that an evaluation of SUSs be undertaken on their behalf, requiring an extension of the exercise. Evaluation equipment was acquired for use at RAL and Edinburgh University (Department of Artificial Intelligence) during the summer and autumn of 1984. ICL's Perq2 was used as a reference machine. Vendors were asked to supply equipment as close as possible to a standard configuration, consisting of a monochrome A3 display, 2 MB of real memory, and 40 MB of disc space.

SUN2/120, MG-1 and the 275th PERQ
© UKRI Science and Technology Facilities Council

The rest of this paper describes the evaluation criteria used; an overview of the evaluation equipment used and the suppliers; quantitative assessments; user assessments; qualitative assessments; an overview of the results; the outcome of the exercise on the Common Base; and finally a description of the major developments in the suppliers' offerings since the evaluation.

Note that b = bit, B = byte, throughout this paper.

3 EVALUATION CRITERIA

A draft document covering evaluation criteria was drawn up, with input from a wide spectrum of user representatives. This document was very detailed, and no weightings were applied to the items: the criteria were instead divided into broad bands. Complex weighting schemes were considered, but in practice these seem to be of little benefit except to inject a degree of (possibly spurious) objectivity.

Items considered to be of major importance were:

  1. processor power (including floating point)
  2. interactive graphics (including rasterop, colour, input devices)
  3. virtual memory management
  4. UNIX and system software
  5. communications
  6. company viability
  7. cost

Items of next rank were:

  1. software exploiting SUS features effectively
  2. documentation
  3. ergonomics and environment
  4. interfaces
  5. compilation speed

Items at the lowest rank were:

  1. software not using SUS features
  2. hardware options (range of discs, tapes, etc)

It should be stressed that this was seen as a relative priority: items at the lowest rank were still important. The basic thinking behind the categories was that if the first set was not reasonably satisfied, then the equipment could not support the style of working required; given that the first set was satisfied, then the second set could be used to differentiate the equipment by investigating further criteria, such as the use made of SUS features, and so on.

4 EVALUATION EQUIPMENT AND COMPANY BACKGROUND

All prices in this section are November 1984 list and do not include VAT.

4.1 Apollo

4.1.1 Evaluation equipment

Apollo tendered a DN550 against the OR. This equipment consisted of

£k
Motorola 68010 cpu, 1024 × 800 landscape colour display, 3 MB memory 42
mouse, PEB, Multibus 8
50 MB disc and streamer tape 11
Ethernet gateway 3
X25 gateway 6
AUX, F77, C, Pascal 4
TOTAL 74

The PEB (= Performance Enhancement Board) provides much faster floating point. AUX is the Apollo implementation of UNIX System III.

In the event, a DN550 with disc was not available and the proprietary Domain local area network was used to connect to the disc on a loaned DN660. Apollo were satisfied that performance would not be significantly impaired (nor significantly enhanced!)

The DN550 was part of a range of machines. The table below gives an overview of the range at the time, with costs for a 2 MB memory + 40 MB disc machine, or closest equivalent, with floating point unit, but excluding the PNA or Multibus connections (needed for large Winchesters or SMD discs, or communications).

Model CPU Screen horiz × vert Colour/Mono Memory Min - Max (MB) Disc Options (MB) Cost (£k) Mtce (£k) Comment
DN300 68010 1024 × 800 M 1 - 3 34, 70 31 2.7 No fl pt unit
DN320 68010 1024 × 800 M 1 - 3 34, 70 38 3.6 Poorer graphics
DN460 bitslice 1024 × 800 M 1 - 4 80, 167 57 5.1 1 MIP processor
DN550 68010 1024 × 800 C 1 - 3 50 50 4.0 1.5 MIP processor
DN660 bitslice 1024 × 1024 C 1 - 4 80, 167 72 6.4 1 MIP processor

A file server (DSP80) was also available with one or two 500 MB Winchester drives at £36k or £57k. PNA or Multibus connections were available at £3k; these allowed connection of the 500MB Winchesters (£25k). X25 and Ethernet gateways required a PNA and cost £7k and £4k respectively.

4.1.2 Company background

The company, US owned and originated, was founded in 1980, mainly by ex-Prime senior management. The UK subsidiary was formed in 1981. By mid-1984, there were about 1600 staff world-wide, 45 in the UK (expected to rise to 80 by the end of the year). There was no reliance on external finance. Some 6000 workstations had been shipped, 335 in the UK; this latter figure reached 600 by the end of 1984. A European repair facility for Apollo had been set up at Livingston, Scotland.

Apollo's hardware testing was extensive, both at board and system level; both environmental and electrical testing was performed. Major increases in manufacturing capacity were scheduled to take place during 1985: at the time of the evaluation Apollo could not deliver as many systems as required, with delivery times of the order of 3-4 months. All parts were dual-sourced.

Support in the UK was centred on four offices, Livingston, Manchester, Milton Keynes and Richmond. Two further offices were planned, at Birmingham and Leeds. Hardware support was available on standard maintenance contract, time and materials, or return to factory bases. Response time to hardware problems was within 8 working hours if less than 100 miles from a support centre, or by arrangement otherwise. Reduced support costs were possible for a number of nodes at any single site.

4.2 SUN Microsystems

4.2.1 Evaluation equipment

SUN offered a SUN 2/120 against the OR, comprising

£k
Motorola 68010 cpu, 1152 × 900 landscape monochrome display, 2 MB memory, Ethernet interface, Multibus 20
42 MB disc and streamer tape 8
Floating point processor 3
BSD 4.2, F77, Pascal 0
TOTAL 31

The attached colour controller and monitor (640 × 480 resolution) cost £7k. Options of 130 MB and 260 MB Winchester discs were available, at £13k and £18k respectively. The SUN 2/170, a rack-mounted version offered at a similar price, had 130 MB and 380 MB discs available as well as SMD drives via the Multibus connections. Discless nodes could be supported in a limited way, by reserving a partition on a disc for a specific node. No inter-partition transfer was possible.

4.2.2 Company background

SUN Microsystems was founded in February 1982. In mid-1984 it employed some 560 staff (25 in Europe) and expected to employ about 950 by June 1985 (about 70 in Europe). The SUN2 was introduced in November 1983 and proved to be very popular, overcoming the SUN1's performance and reliability problems. Manufacturing was wholly US-based, with a further assembly/test facility at Frankfurt for European operations. Machines were tested, both at the board and system levels, and considerable new capacity was coming on stream, mainly for the planned 2/50, expected to be a major seller. All components were dual-sourced, with the exception of the Ethernet and rasterop chips. Delivery on 2/120 systems was 3 to 4 months.

The UK operation, based in Ascot, employed some 12 staff at the time of the evaluation; this figure was expected to rise to 20 by mid-1985, with new support offices at Manchester and Edinburgh. Next-day response was offered (at 8% of purchase price for hardware and software maintenance), with a 4-hour response promised by spring 1985. Discount arrangements for multiple cpus on one site were possible.

4.3 Whitechapel Computer Works

4.3.1 Evaluation Equipment

Whitechapel offered an MG-1 against the OR, comprising

£k
National Semiconductor 32016, 1024 × 800 landscape monochrome display, 2 MB memory, 40 MB disc, Floating point unit 9.0
GENIX (BSD 4.1), C 0.0
Pascal, F77 1.0
Ethernet 0.5
TOTAL 10.5

4.3.2 Company background

Whitechapel Computer Works was founded in April 1983. The MG-1 (the MG stands for Milliard Gargantubrain, for followers of The Hitchhiker's Guide to the Galaxy; we await the MG-42 with interest) aimed to provide PC style cost and volume with SUS performance. Start-up finance was provided by the Greater London Enterprise Board and the DTI. A further round of funding had been obtained from the City. 40 or so staff were employed in 1984, all in the Whitechapel offices.

Full environmental testing was not undertaken, but was planned. Most components were dual-sourced, with the exceptions of the 32016 chip set; the Rodime disc (a standard SCSI offering); the Ethernet and DMA chips (AMD produced, but also to be available from Mostek); the MC68121; and the NEC disc controller (to be second-sourced). Delivery times were 1 to 3 months, depending on volume.

Support was concentrated in London, with a Manchester office planned.

4.4 ICL

Although the Perq2 was already in the Common Base, this section is included for comparison.

4.4.1 Evaluation Equipment
£k
ICL Perq2, bit-sliced, under-microcodeable cpu, 2 MB memory, 1280 × 1024 monochrome landscape display, 36 MB disc, keyboard, mouse, Ethernet connection, Unix, Fortran 77, Pascal, C, GKS 6
X25 Connection 2
TOTAL 47
4.4.2 Company Background

ICL is the largest UK-owned computer company, offering a wide range of machines from micros to large mainframes. The segment of the company involved with SUSs is the Perq Business Centre, with about 100 staff. ICL have a production and marketing arrangement with the original designers of the Perq1, the Three Rivers Computer Corporation (later Perq Systems Corporation) of Pittsburgh, USA. (The US company offered a different operating system, POS, with a completely different instruction set.) Support for the Perq in the UK was via the normal ICL service centres. Delivery times were about 6 weeks.

5 QUANTITATIVE ASSESSMENTS

(A detailed list of the benchmarks and results is given in the Appendix, numbered as the sub-sections in this section.) The results were normalised by dividing the time for each benchmark by the fastest time on that benchmark; thus the fastest machine had a rating of 1, and all the others had ratings greater than 1. All appropriate optimisation methods were used. Note that the Apollo does not provide separate system and user times; these have been added together for the other systems to provide a reasonable comparison.
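
Expressed as a formula, where t(m, b) is the measured time for machine m on benchmark b, the rating quoted in the tables is

    \[ r_{m,b} \;=\; \frac{t_{m,b}}{\min_{m'} t_{m',b}} \;\ge\; 1 \]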

5.1 Processor

5.1.1 Integer operations

The results from the various areas are summarised as follows:

Function Apollo Sun WCW ICL
16 bit integer 1.0 1.1 1.7 3.1
32 bit integer 1.6 1.0 1.3 1.3
16 bit loops and branches 1.0 1.1 2.0 4.7
32 bit loops and branches 1.0 1.1 2.1 2.3
Eratosthenes' sieve 2.4 1.0 1.8 2.3

Overall, the SUN2 was the fastest, with the Apollo close behind. The ICL machine was very slow on 16 bit operations.
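
The sieve test is of the kind widely used for integer benchmarking at the time; a minimal C sketch is given below as an illustration only. The exact program used in the evaluation is not reproduced here, and the array size and iteration count are assumptions based on the commonly published version.

    /* Minimal sieve of Eratosthenes kernel, of the kind widely used for
       integer benchmarking in this period. Illustration only: the exact
       program, array size and iteration count used in the evaluation are
       assumptions. */
    #include <stdio.h>

    #define SIZE 8190                      /* array size in the common published version */
    #define ITERATIONS 10                  /* repeat to obtain a measurable run time */

    int main(void)
    {
        static char flags[SIZE + 1];
        int i, k, prime, count = 0, iter;

        for (iter = 0; iter < ITERATIONS; iter++) {
            count = 0;
            for (i = 0; i <= SIZE; i++)
                flags[i] = 1;
            for (i = 0; i <= SIZE; i++) {
                if (flags[i]) {
                    prime = i + i + 3;     /* flags[i] represents the odd number 2i+3 */
                    for (k = i + prime; k <= SIZE; k += prime)
                        flags[k] = 0;      /* strike out multiples of this prime */
                    count++;
                }
            }
        }
        printf("%d primes\n", count);
        return 0;
    }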

5.1.2 System operations
Function Apollo Sun WCW ICL
procedure call 2.5 1.0 2.9 1.8
assign character, field; getpid etc 3.2 1.0 2.0 1.8
process overheads (kashtan) ? 2.1 1.2 2.1 1.0

The Apollo failed to execute correctly one of the Kashtan benchmarks. Overall, the SUN2 was fastest, with Perq some 40% slower and both the WCW and the Apollo over twice as slow.

5.1.3 Floating point operations
Function Apollo Sun WCW ICL
Single precision 1.2 1.0 1.0 2.0
Double precision 1.4 1.5 1.0 2.8
Matrix inversion coefficient 1.0 1.8 1.4 2.5

The Whitechapel performed best, with Apollo close and the SUN2 slightly slower again. The Perq2's microcoded floating point was about half the speed of the others.

5.2 Disc Performance

Function Apollo Sun WCW ICL
sequential writes of 512B each 1.6 1.0 2.2 6.1
sequential reads of 512B each 1.0 2.5 3.7 7.3
random reads of 512B each 2.6 1.0 3.3 5.3

Overall, the SUN and Apollo were comparable on this benchmark, with the Whitechapel roughly twice as slow and the Perq over three times slower.

A sequence of benchmarks using a range of blocksizes from 1B to 2KB showed the following:

Function Apollo Sun WCW ICL
Maximum transfer rate 3.7 1.0 5.0 4.9
Blocksize (B) at 90% maximum transfer 320 1300 350 610

The SUN2 was much faster than the rest at maximum speed (some 200 KBps) by about a factor of four (even the DN660 could only manage 100 KBps), while still performing well at smaller blocksizes, as evidenced by the earlier benchmarks.
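
The blocksize tests amount to timed writes of a fixed amount of data using different request sizes. A minimal sketch of such a test is shown below; it illustrates the approach only (simple write() calls, ignoring kernel write-behind buffering, which a real test would need to account for), and is not the actual benchmark suite used.

    /* Sketch of a sequential-write throughput test over a range of blocksizes.
       Illustration only: kernel write-behind buffering is ignored here. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define TOTAL (1024L * 1024L)          /* bytes written at each blocksize */

    int main(void)
    {
        static char buf[2048];
        static int sizes[] = { 1, 64, 256, 512, 1024, 2048 };
        struct timeval t0, t1;
        double secs;
        int s, fd;
        long done;

        memset(buf, 'x', sizeof buf);
        for (s = 0; s < (int)(sizeof sizes / sizeof sizes[0]); s++) {
            fd = open("blocks.tmp", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }
            gettimeofday(&t0, NULL);
            for (done = 0; done < TOTAL; done += sizes[s])
                write(fd, buf, sizes[s]);  /* one request per block */
            close(fd);
            gettimeofday(&t1, NULL);
            secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1.0e6;
            printf("%5d B blocks: %7.1f KB/s\n", sizes[s], TOTAL / 1024.0 / secs);
        }
        unlink("blocks.tmp");
        return 0;
    }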

5.3 Virtual Memory Management

The impact of real memory sizes must be borne in mind here. The RAL Apollo (DN550) ran most benchmarks with 3 MB, although one memory board was removed to provide evidence of its VM capabilities with 1.5 MB main memory. At Edinburgh, the SUN machine was supplied with 4 MB main memory, although the memory available could be changed by software. The Edinburgh Apollo (DN320) had 1.5 MB main memory, and its Perq1 had 1 MB. Interpretation was hence difficult, and further clouded by the differing data page sizes - 1 KB (Apollo); 2 KB (SUN); 512 B physical, 1 KB logical (WCW); 128 KB (ICL).

The first benchmark (from the AIM suite) was in 4 parts (Apollo with 3 MB memory):

Function Apollo Sun WCW ICL
sequential write of 1 MB 2.8 1.0 2.7
sequential read of 5 MB 1.4 1.0 1.7
random write of 1 MB 3.3 1.0 1.3
random read of 5 MB 1.8 1.0 4.9
Average of results 2.3 1.0 2.7

The second benchmark wrote and read a variety of array sizes (0.25 to 4 MB) sequentially, and then randomly read back 1 KB blocks over the array (results for the 3 MB array shown):

Function 3 MB Apollo 1.5 MB Apollo Sun WCW ICL
sequential write of 3 MB 1.7 1.6 1.0 1.4 1.0
sequential read of 3MB 1.0 1.1 1.8 2.4 1.2
random read of 1 KB 1.0 4.9 44.3 36.0 216.8

Initialising the arrays in a spiral produced the following results for a 3 MB array size:

Function 3 MB Apollo 1.5 MB Apollo Sun WCW ICL
spiral write of 3 MB 1.0 1.2
spiral read of 3MB 1.3 1.0

At this array size, the SUN2, the 1.5 MB Apollo and the Perq failed to complete in a reasonable time.

More detail on these last two sets of benchmarks, including a range of array sizes, is given in Appendix A5.3.

Overall, the performance can be summarised as follows. The SUN2 performed well except when random access over large amounts of virtual memory (in excess of real memory size) was necessary - the first benchmark in this section pointed this up, as the random read of 5 MB was over 1 MB areas (which can therefore be expected to be swapped in), while the other tests were over larger amounts - up to 4 MB. Both the Apollo and the Whitechapel machines performed well. The Perq performed badly on large arrays, as can be expected given its very large page size, and often failed to complete in a sensible time.
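
The style of program behind these figures is sketched below: a large array is touched sequentially and then probed randomly in 1 KB steps, so that once the array exceeds real memory the random pass degenerates into roughly one page fault per probe. This is an illustration only, not one of the actual benchmark programs; the array size and access pattern are assumptions modelled on the description above.

    /* Sketch of a virtual memory exercise: sequential then random access over
       an array larger than real memory. Illustration only. */
    #include <stdio.h>
    #include <stdlib.h>

    #define MB (1024L * 1024L)
    #define STEP 1024L                     /* 1 KB read at each random probe */

    int main(void)
    {
        long size = 4 * MB;                /* larger than real memory on the machines tested */
        long blocks = size / STEP;
        char *a = malloc((size_t)size);
        long i, sum = 0;

        if (a == NULL) { perror("malloc"); return 1; }
        for (i = 0; i < size; i++)         /* sequential write: about one fault per page */
            a[i] = (char)i;
        srand(1);
        for (i = 0; i < blocks; i++)       /* random 1 KB probes: up to one fault each */
            sum += a[(rand() % blocks) * STEP];
        printf("checksum %ld\n", sum);
        free(a);
        return 0;
    }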

5.4 Compilation System

5.4.1 Compilation speeds
Function Apollo Sun WCW ICL Comment
Fortran 77 1.0 2.6 3.8 FPTEST Program
Pascal 1.0 3.1 4.4 Treemeta compiler-compiler
C 1.6 1.0 2.2 1.7 C-Prolog

Only the Apollo managed to compile the full 12000 line FPTEST program successfully. Both the SUN2 and the WCW gave spurious compiler errors; the Perq overflowed its symbol table.

5.4.2 Mathematics Library Performance
Function Apollo Sun WCW ICL
Fortran 77 (Single) 1.0 4.3 3.9 4.9
Fortran 77 (Double) 1.0 7.9 3.6 18.3
Fortran 77 (complex) 1.0 5.3 2.4 3.7
Pascal 1.0 8.8 12.7
C 1.0 4.2 2.0 6.5

The figures were averaged over six functions (exp, log, square root, sine, hyperbolic tangent, inverse cosine); more detail is given in Appendix A5.4. As can be seen, the Apollo greatly outperformed the others, with the Whitechapel next at between two and four times slower, and the SUN and Perq roughly a further factor of two slower again.

5.5 Graphics

5.5.1 Hardware and Software Overview

An overview of the hardware and its claimed basic performance is given in the following table:

Function Apollo Sun WCW ICL
screen horizontal (pixels) 1024 1152 1024 1280
screen vertical (pixels) 800 900 800 1024
screen resolution (pixels/inch) 73 80 80 94
refresh rate (Hz; I = interlaced) 40 (I) 66 57 60
no. of bit planes for colour 8 (8)
cursor size (pixels) 16 × 16 16 × 16 64 × 64 64 × 64
rasterop speed (M pixels/s) 20 8 32
display memory bandwidth (Mbps) 320 48 200 44

SUN gave no speed for their rasterop chip, which in any case only works on display (not user) memory. In practice it seems to be little used, the main processor supplying the necessary functionality via software. Only Apollo offered an integrated colour display; the SUN device is lower resolution (640 × 480) and is of lower performance.

Graphics primitives supported are shown below:

Function Apollo Sun WCW ICL
line * * * *
text * * * *
rasterop * * * *
area fill * *
polyline *
spline *
circle * *
arc * *

Only the Perq offered GKS at the time of the evaluation, although all the other manufacturers were in the process of supplying and supporting it.

5.5.2 Graphics benchmarks

Line drawing (a total of 24 M pixels) results were as follows:

Function Apollo Sun WCW ICL
line 1.0 1.4 1.2
clipped line 1.0 1.5 6.3 1.6
line in window 1.2 3.8 9.3 1.0
clipped line in window 1.0 3.2 7.9 1.1

Apollo and Perq were comparable, with the SUN about three times slower when the window manager was used. The Whitechapel was much slower - by about a factor of 8.

Function Apollo Sun WCW ICL
character output 1.2 3.3 1.0 1.4

The SUN was rather slower than the others, by about a factor of 3.

Rasterop performance was assessed in two ways - firstly by writing about 650 Mpixels in randomly sized rectangles (both with and without clipping), and secondly by scrolling a page of text a pixel width at a time, both horizontally and vertically.

Function Apollo Sun WCW ICL
rasterop 1.4 8.1 1.0
rasterop with clipping 1.2 7.3 21.7 1.0
scroll left 1.0 8.0 67.0 1.1
scroll up 1.2 3.7 60.6 1.0

The value of specialist rasterop hardware can be seen here - both the Apollo and the Perq had comparable performance, with the SUN averaging about seven times slower. The Whitechapel's very poor performance reflected the state of the software, rather than the basic hardware.
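
To see why a software rasterop is so costly, consider even the simplest case - a word-aligned rectangle copy, sketched below. A real rasterop must additionally handle arbitrary bit alignment, overlapping source and destination, and the various logical combination rules, all of which the specialist hardware provides; the sketch is an illustration only, not any supplier's implementation.

    /* Word-aligned rectangle copy between bitmaps: the simplest possible
       software "rasterop". Illustration only - real rasterops also handle
       bit-level alignment, overlap and logical combination rules. */
    void rect_copy(unsigned short *src, int src_stride,   /* strides in 16-bit words */
                   unsigned short *dst, int dst_stride,
                   int width_words, int height)
    {
        int x, y;

        for (y = 0; y < height; y++)
            for (x = 0; x < width_words; x++)
                dst[y * dst_stride + x] = src[y * src_stride + x];
    }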

5.6 Inter Process Communication

Only the Apollo and the SUN supported an IPC mechanism, the former via shared memory and the latter by the Berkeley 4.2 socket mechanism. (Note that IPC is different from the pipe mechanism: processes communicating via pipes must have common ancestry.) A simple benchmark consisting of 10000 round-trip messages, with a range of message sizes from 1B to 2032B, showed that the SUN was 80% faster than the Apollo in general. More detail is given in Appendix A5.6.
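
The shape of the round-trip test is sketched below for the socket case, assuming a BSD 4.2 style socketpair between a parent and a child process. This is an illustration of the method only; the Apollo shared-memory variant and the exact message handling used in the evaluation are not reproduced.

    /* Round-trip IPC timing sketch using a BSD 4.2 style socketpair.
       Illustration only: not the evaluation's benchmark program. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    #define ROUND_TRIPS 10000

    static void read_full(int fd, char *buf, int n)   /* read exactly n bytes */
    {
        int got, r;

        for (got = 0; got < n; got += r)
            if ((r = read(fd, buf + got, n - got)) <= 0) { perror("read"); exit(1); }
    }

    int main(int argc, char **argv)
    {
        int size = argc > 1 ? atoi(argv[1]) : 1;      /* message size in bytes, e.g. 1..2032 */
        char *buf = malloc((size_t)size);
        struct timeval t0, t1;
        double usec;
        int fd[2], i;

        memset(buf, 0, (size_t)size);
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, fd) < 0) { perror("socketpair"); return 1; }
        if (fork() == 0) {                            /* child: echo every message back */
            for (i = 0; i < ROUND_TRIPS; i++) {
                read_full(fd[1], buf, size);
                write(fd[1], buf, size);
            }
            return 0;
        }
        gettimeofday(&t0, NULL);
        for (i = 0; i < ROUND_TRIPS; i++) {           /* parent: send, wait for the echo */
            write(fd[0], buf, size);
            read_full(fd[0], buf, size);
        }
        gettimeofday(&t1, NULL);
        usec = (t1.tv_sec - t0.tv_sec) * 1.0e6 + (t1.tv_usec - t0.tv_usec);
        printf("%d byte messages: %.1f us per round trip\n", size, usec / ROUND_TRIPS);
        return 0;
    }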

6 USER ASSESSMENTS

In this section, results from three main activities are described - the IKBS evaluation done by the Department of Artificial Intelligence, Edinburgh University; the CAD programs ported by Technology Division, RAL; and the Spy screen editor port, by the SUS development team at RAL.

6.1 IKBS Evaluation

A number of activities were undertaken as part of the evaluation, although only those relating to C-Prolog and the CHAT-80 applications system (written in Prolog) are described here. Results for the 2 MB memory SUN are presented, to aid comparison with the Apollo DN320 (1.5 MB) and Perq1 (1 MB) systems used at Edinburgh.

Function Apollo Sun WCW Perq1
C-Prolog make time 1.6 1.0 2.2 1.7
LIPS rating 1.6 1.0 1.6 2.0
CHAT-80 setup (cpu) 1.8 1.0 1.7 2.3
CHAT-80 setup (elapsed) 1.0 1.1 1.0 22.8
CHAT-80 Q1 1.7 1.0 1.6 2.5
CHAT-80 Q2 1.7 1.0 1.6
CHAT-80 Q3 1.6 1.0 1.6 2.5

CHAT-80 questions:

Q1: Does Afghanistan border China?

Q2: What percentage of countries border each ocean?

Q3: Which country bordering the Mediterranean borders a country that is bordered by a country whose population exceeds the population of India?

As can be seen, the SUN was consistently the fastest, with the Apollo and Whitechapel slower by about 60 to 70%.

Major comments by the team were:

  1. the Apollo user interface was preferred, the multi-window debugging system in particular providing a good development environment;
  2. the SUN had no major virtues or vices, and performed fastest;
  3. the Whitechapel was at too early a stage in its development to evaluate, with no graphics support or window manager; however the price and performance impressed.

6.2 CAD Programs

The first program ported as part of this exercise was a medium-sized Finite Difference program to integrate three partial differential equations. The program consisted of about 3000 lines of Fortran, with few comments, containing single and double precision floating point arithmetic. The program should have terminated after 36 time-steps with a negative square root error. Results were as follows (all default options used, figures in brackets refer to notes following table):

Function Apollo Sun WCW Perq2
compile 1.0 2.6 (5) 3.8
link (1) 1.0 5.0 15.0
step number at error (2) (3) (4)
run time for 36 steps 1.0 1.4 2.2
  1. Apollo performs a dynamic link of libraries at load time.
  2. DN660 gave a floating point error at step 36!
  3. Program hung the machine; running the system without the Sky floating point accelerator gave a program termination at step 38.
  4. On a Perq1, the program terminated correctly at step 36, with an accurate error indication (!)
  5. The Whitechapel Fortran compiler was not available at the time these benchmarks were run.

The Apollo compilation and run-time system showed up well here. At run-time the SUN2 was rather slower, and the Perq2 over twice as slow.

The second program, written in Fortran 77 with some C routines to provide the graphics support, generated an animation sequence for post-processing of results from a CAD program. Rasterop and integer arithmetic were used extensively. Run time performance was measured by the number of frames plotted per minute.

Function Apollo Sun WCW ICL
rating 1.0 5.5 2.0

Overall comments were: the Apollo's UNIX was not UNIX (writes to Unit 0 not allowed); and the SUN2's graphics performance was not adequate and would not support effective animation.

6.3 Spy Screen Editor

The Spy program consists of about 15000 lines of C, of which about 15% needed changing for machine-specific functions. The program is highly interactive, making extensive use of the mouse and instant feedback: it is thus an acid test of any system's ability to provide the good man-machine interface required on devices of the type sought.

The program had requirements in five functional areas; these areas, together with comments on the various machines, were:

  1. Line drawing and rasterop: No major problems found.
  2. Text drawing and fonts: Spy is restrictive in its use in these areas, and no system gave problems.
  3. Mouse tracking: Both the SUN and the Apollo only signalled changes to the mouse state; it is thus possible that the screen could become out of synchronisation with the program state if a mouse event was lost. Both the Whitechapel and the Perq passed full mouse status.
  4. Window management: The Apollo window manager placed very severe constraints on mouse tracking and output to windows, with the result that the port would have taken at least 3 man-months; given the effort available, it was decided not to proceed with the exercise. With the SUN2, the window manager was felt to be poorly structured, with three entry levels into the software, and required functionality being unavailable in any one level. The port was done, however. The Whitechapel machine had no window manager available, but discussions with Whitechapel staff indicated that the port should be relatively straightforward.
  5. Cursors: The Apollo and SUN only offer 16 by 16 pixel cursors, felt to be rather restrictive.

One side effect of the SUN2's more modern operating system (BSD 4.2) was that certain functions were much more easily handled, and as a result the code size shrank. In use, however, the SUN2 version was much less smooth in operation, and could be visually disturbing.

Following the SUN2 port, a series of benchmarks was run on a standard Spy window (29 lines by 79 characters), with the Spy help file (490 lines long) as the input file. On functions which needed graphics power, such as scrolling or screen redraws, the Perq was about twice as fast as the SUN2. Character operations - such as replace, delete, etc - were about 40% slower on the Perq than on the SUN2. Both these results might be qualitatively expected given the benchmark results in Section 5. (More detailed results can be found in Appendix A6.3.)

7 QUALITATIVE ASSESSMENTS

7.1 Basic Architecture Considerations

The Apollo DN550 and the SUN2 use the Motorola 68010 chip, while the Whitechapel offers the National Semiconductor 32016. Both chips provide position-independent code and good addressing schemes. The 32016 also supports dynamic linking of separately compiled modules, although the cost and benefits of this are difficult to quantify.

In terms of the instruction sets provided, both are rich, with the 68010 in particular having auto increment/decrement mode (useful for multiple stack management): these could be emulated cheaply on the 32016. The 32016 set provides particularly well-integrated floating point, executed by a co-processor (note both the Apollo and the SUN2 provided separate floating point boards for good performance), and better support for debug and trace than the 68010. The Perq2's instruction set is poor.

On the bit mapped graphics side, all systems provide integrated memory mapping and graphics on bit-maps (however only the Whitechapel reflects this at the applications level). Both the Apollo and the SUN2 use a frame buffer to store the screen image, as distinct from the Whitechapel and Perq where there is no distinction between main memory and display memory. The SUN2's rasterop chip only works on display memory, resulting in reduced graphics performance, since rasterop functions on main memory must be provided in software. The SUN2 is noticeably less smooth on scrolling; further, screen updates after window movement are slow enough for the updating algorithm to be seen at work, unlike the Perq and Apollo, which are essentially instantaneous in operation.

7.2 Basic Software

7.2.1 UNIX

All the machines offered different versions of UNIX. The SUN2's was the most advanced - Berkeley BSD 4.2 - while the Whitechapel offered GENIX, essentially Berkeley BSD 4.1, the major differences from BSD 4.1 being different virtual memory management, scheduling and swapping methods. Perq offered V7 UNIX with some System III extensions such as SCCS. The Apollo provided an implementation of System III on the Aegis kernel (Aegis being a proprietary operating system); this implementation was in general good, but with some surprising omissions (such as the ps command), and some irritating deviations, especially for device driver writers, who must integrate device drivers in a non-standard (non-UNIX) way. This has particular implications for communications work (see Section 7.3). (It is probably fair to say that Aegis offers better functionality than UNIX, but this is no advantage to us as UNIX is a requirement for portability reasons.)

With the compilers and compilation system, the Apollo provided the most developed system overall, although the SUN2's C compiler was much faster, and its Pascal compiler was preferred for its much improved diagnostics and lack of non-standard extensions. The Perq compilers were slow.

7.2.2 Window Management

Possibly the greatest differences found between the machines were in the window management area. The Apollo window manager was built in at a very low level in the system and had some attractive features, such as the history mechanism. However, in terms of implementing SUS-oriented software there were some very real drawbacks, in particular the difficulty of outputting to windows unless they were totally uncovered (making the building of a complex picture in the background almost impossible, for example). Tracking of the mouse by the applications program was also difficult, and this was further compounded by the inability of the system to track rapid mouse movement effectively with the screen cursor.

The SUN window manager was adequate, but appeared to suffer from inadequate specification, with functionality being spread over three levels of software. There were separate procedures for writing to main memory and the display memory, a surprising choice. Like the Apollo, rapid movement of the mouse could take some time to be reflected on the display. Most things could be done with the SUN however, unlike the Apollo.

The Whitechapel at the time had no window manager; however the design documents showed that it had the capability of being the best of the three systems under evaluation. First release was scheduled for December 1984.

(As an aside, it is depressing to note that after all that has been said about the Perq window manager, at the time it was still probably the best available in a UNIX environment.)

7.2.3 Debugging aids

Only the Apollo offered an integrated, multi-window debugging and process profiling system. The SUN had DBX (a source level debugger) and a system profiling utility. Whitechapel provided merely an assembler-level debugger (DDT). Perq had SDB.

7.3 Communications

By and large, only the Perq offered useful communications in our environment. The Apollo outlook was dominated by its efficient, well-integrated, but proprietary Domain local area network, which is the back-bone of its distributed operating system. The Domain is an 11 Mbps token ring LAN; as a result of its proprietary nature, Apollo tended to view the rest of the world as accessible by gateways on the Domain rather than by direct LAN or WAN connections to individual nodes.

7.3.1 X25 Wide Area Network

The Apollo offering was an X25 connection targeted at connecting separate Domain LANs; it was expensive (£6k + VAT). Implementing the Coloured Books was made more difficult by the non-standard UNIX device driver interface; in particular multiplexing of input/output to the Multibus channel needed user programming. SUN had an X25 board under test. Whitechapel preferred an in-board solution, but initially were going for the York front-end box. Perq already offered X25 via the York box. Crude estimates of effort involved in a Coloured Book implementation were: 1 man year for Apollo, 6 man months for SUN, and 6 man months for Whitechapel. All three suppliers were prepared to commit to Coloured Book X25 connections if a sufficiently large order was placed.

7.3.2 Ethernet

Both Apollo and SUN provided the DARPA protocols TCP/IP over (different) basic Ethernet specifications. (These did in fact communicate well, the only problem occurring when an Apollo software upgrade caused a connection on the SUN to become loose.) The Apollo connection was expensive (£3.5k), while the SUN, Whitechapel and Perq2 connections cost some hundreds of pounds. Only ICL offered any protocol of interest - ECMA 72 Class 4 (soon to migrate to ISO Class 4).

7.3.3 Cambridge Ring

There was no commercial interest in the Cambridge Ring by any of the companies.

7.3.4 Newcastle Connection

No company, other than ICL, had committed to the provision of Newcastle Connection over Ethernet. The work needed to port to the SUN and Whitechapel was estimated to be relatively small; however the Apollo port was a much more significant piece of work and the RAL estimate for effort required was at least one man year.

7.4 Applications Software

The Apollo had the widest range of applications software, mostly in the CAD area, with about 150 suppliers. SUN had about 50 third-party vendors, again mainly in the CAD area; this number was rapidly increasing. Whitechapel, as a new venture, had little software but were both generating much interest in the software supply industry (because of the low price and good specification) and actively in contact with many vendors. [A list of the applications software available for each of the machines was provided for the assessment, but is not reproduced here.]

7.5 Documentation

Documentation available from the suppliers was as follows:

7.5.1 Apollo
7.5.2 SUN

A series of tutorial documents ('White Papers') are also planned. Topics include: Unix; Window Management; Communications; and Graphics.

7.5.3 Whitechapel

8 RESULTS OVERVIEW

8.1 Apollo DN550

Sound basic hardware, but did not significantly outperform the competition except in graphics performance, where it was comparable to the Perq, and much faster than the SUN or the Whitechapel. The UNIX implementation had some ragged edges (more noticeable to the system programmer). The window manager could not support a modern graphics-oriented interface, although it had other features of merit. There was more software available for the development programmer, and much more for the end-user. It was very expensive: £74k (exc VAT) for a reasonable configuration, with no communications.

8.2 SUN 2/120

Good basic hardware - faster than the others in many respects - but marred by poor graphics performance. It had a 'state of the art' UNIX implementation, and an adequate window manager. There was little aid (above the standard UNIX software) for program development (although this was expected to change in 1985 with the next major software release). The applications software base was sizeable and growing fast, though significantly smaller than the Apollo's. Medium price - around £30k.

8.3 Whitechapel MG-1

The basic hardware impressed, more so when the price (around £10k) was taken into consideration. The lack of software - no window manager or even Pascal - meant that the system was not in a state for delivery to general users. Would be of much more interest 6-12 months after the evaluation, when the product had matured.

9 THE OUTCOME

No one machine met all the requirements. The SUN2 was a good all-rounder, with its major deficiency in graphics performance; however it could be seen that the development of the Motorola 68000 series would make significant improvements in that area. As a consequence, the SUN2 was chosen both as an Alvey Infrastructure SUS and for the SERC Common Base Programme.

10 DEVELOPMENTS SINCE THE EVALUATION

Since the end of the evaluation exercise (January 1985) further products have obviously become available, and the product line for most companies has developed considerably. This section gives a brief overview of these developments. Increases in disc sizes have not been covered; processor developments and software improvements have been the main items of interest.

10.1 Apollo

New processors since the evaluation use the Motorola MC68020 cpu and MC68881 floating point chips. These include the DN330 monochrome and DN560 colour nodes, and early in 1986 the DN3000M and C (monochrome and colour) 'personal workstations' were announced. The 3000 series offer PC-AT and PC-XT slots. Two high performance graphics workstations (with specialist graphics processors), the DN570 and DN580, are also available. All these colour machines now operate with a 60Hz non-interlaced display. File, print and compute servers are all available.

On the software front, a twin port of the AT&T System V Release 2 and Berkeley 4.2 UNIX systems is now available (as well as the proprietary AEGIS system). GKS level 2b, X25 'Coloured Book' communications, and the Domain Software Engineering Environment are all available. The applications software list now has some 450 entries.

10.2 SUN Microsystems

During the early part of 1985 SUN Microsystems introduced the SUN 2/50 (a cheap, single-board, discless equivalent of the 2/120) and an integrated colour machine, the 2/160. The later part of the year and the beginning of 1986 saw the following family of SUN3 machines, all Motorola 68020/68881 based:

Floating point accelerators, about three times faster than the MC68881, are available on the 3/160 and 3/180 machines.

Much system software has been added since the evaluation, including the SunPro (Sun Programming Environment) and SunView (Sun Visual/Integrated Environment for Workstations) systems. SunPro provides an extended UNIX programming environment (integrated symbolic debugging, text editing etc), and SunView, a 'construction kit' for user interfaces. GKS and Common Lisp are also available. The applications software list is now extensive, with about 350 entries.

SUN's Network File System (NFS) provides a distributed file system for an (Ether) network of SUNs and other machines - the NFS definition has been made public domain. SUN and AT&T have agreed an eventual convergence of their two flavours of UNIX. Newcastle Connection is also available.

10.3 Whitechapel Computer Works

During 1985 major developments took place, mainly on the software front. A factor of 6 improvement in graphics performance was obtained, and the Oriel state-of-the-art window manager was developed. The Newcastle Connection and SUN's NFS have also been announced as products, available on 42NIX, the Whitechapel release of Berkeley BSD 4.2. A small but healthily growing set of applications has been produced. In early 1986 a medium resolution colour graphics system, the CG-1, was announced, followed by a discless node later in the year.

10.4 ICL

Major developments since the evaluation have been the introduction of servers (accessed via the Newcastle Connection) for large discs, half-inch magnetic tape, and cartridge tape. The number of applications available has grown relatively slowly.

ACKNOWLEDGEMENTS

Too many people contributed to the exercise to thank everyone individually. The major part of the work was undertaken by the Common Base Programme team in Computing Division at RAL, but with significant help from colleagues in Software Engineering, IKBS and Communications within the Division. Thanks are also due to Bryan Colyer of Computing Applications Group in Technology Division at RAL, and Robert Rae and his co-workers in the Department of Artificial Intelligence at the University of Edinburgh.

Responsibility for errors or omissions is, of course, the author's.

REFERENCES

  [1] Prosser C, Robinson K, and Williams A S: An Operational Requirement for Assessing Single User Systems, RAL Report RAL-86-028, April 1986.

    APPENDIX: Detailed Benchmark Results

    A5.1.1 Processor Performance

    16 bit integer operations
    Function Apollo Sun WCW ICL
    add 1.0 1.2 2.0 4.5
    subtract 1.0 1.2 2.0 4.6
    multiply 1.1 1.0 2.1 3.7
    divide 1.1 1.0 1.5 2.6
    assign literal 1.0 1.0 1.8 4.4
    assign integer 1.0 1.0 1.7 4.8
    mod 1.0 2.0 1.4 2.3
    assign array 1.0 1.3 2.5 4.9
    k/k*k + k-k 2.1 1.0 1.7 2.7
    1/2*3-4+5 1.3 1.0 1.8 2.4
    assign pointer 1.0 1.2 2.1 2.5
    increment pointer 1.0 1.4 2.6 3.0
    32 bit integer operations
    Function Apollo Sun WCW ICL
    add 1.0 1.1 1.9 1.5
    subtract 1.0 1.1 1.9 1.5
    multiply 2.1 1.4 1.2 1.0
    divide 2.3 1.5 1.1 1.0
    assign literal 1.0 1.0 1.8 1.8
    assign integer 1.0 1.2 2.1 1.9
    mod 7.7 1.5 1.1 1.0
    assign array 1.0 1.3 1.9 2.6
    k/k*k + k-k 2.5 1.6 1.1 1.0
    1/2*3-4+5 2.9 1.0 1.2 1.0
    assign pointer 1.0 1.0 1.8 1.7
    increment pointer 1.0 1.4 2.6 2.9
    16 bit loops and branches
    Function Apollo Sun WCW ICL
    for 1.0 1.2 2.2 5.1
    while 1.0 1.0 1.9 4.4
    repeat 1.0 1.3 2.4 6.2
    if..then..else + integer assign 1.1 1.0 1.8 4.1
    idem, with 1 then: 999 else 1.0 1.0 1.8 4.1
    32 bit loops and branches
    Function Apollo Sun WCW ICL
    for 1.0 1.2 2.2 5.1
    while 1.0 1.0 1.9 4.4
    repeat 1.0 1.3 2.4 6.2
    if..then..else + integer assign 1.1 1.0 1.8 4.1
    idem, with 1 then: 999 else 1.0 1.1 2.0 1.8
    Eratosthenes' sieve
    Function Apollo Sun WCW ICL
    Eratosthenes' sieve 2.4 1.0 1.8 2.3

    A5.1.2 System Operations

    Function Apollo Sun WCW ICL
    Procedure Call
    5 nested calls, parameterless 1.1 1.0 2.4 1.6
    5 nested calls, each passing 16 bit integer 2.4 1.0 2.3 1.8
    5 nested calls, each passing 32 bit integer 2.1 1.0 2.1 1.3
    5 nested calls, each passing 16 bit integer address 2.3 1.0 2.5 1.6
    5 nested calls, each passing 32 bit integer address 2.2 1.0 2.4 1.4
    5 nested calls, last assigning integer 2.4 1.0 2.6 1.4
    Other
    assign character 1.0 1.2 2.0 3.5
    assign field 3.4 1.0 2.0 1.9
    get pid 7.1 3.0 5.6 1.0
    stat (buf and statbuf) 3.2 1.0 1.1 2.5
    open and close file 4.1 1.0 1.8 2.4
    Process overheads (kashtan)
    two processes signalling each other - 1.0 2.4 1.5
    process signalling itself 2.0 1.4 2.5 1.0
    process writing to itself via pipe 3.8 1.6 2.2 1.0
    process to process via pipe 2.3 1.3 1.9 1.0
    processes transmitting to each other 3.4 1.4 2.7 1.0

    A5.1.3 Floating Point Operations

    Function Apollo Sun WCW ICL
    single precision add 1.2 1.0 1.2 1.6
    subtract 1.3 1.0 1.1 1.6
    multiply 1.3 1.0 1.1 2.8
    divide 1.2 1.2 1.0 2.3
    double precision add 1.3 1.2 1.0 1.2
    double precision subtract 1.3 1.2 1.0 1.3
    double precision multiply 1.6 1.5 1.0 4.7
    double precision divide 1.5 2.0 1.0 4.0
    matrix inversion 1.0 1.8 1.4 2.5

    Matrix inversion: for 10 by 10, 20 by 20 up to 50 by 50 matrices. Opcount proportional to (rank of matrix)**3. Coefficient of proportionality calculated for each machine, giving ratios above.
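
    In other words, the time for each machine to invert an n by n matrix is modelled as

        \[ t(n) \approx c\,n^{3}, \qquad c = \frac{t(n)}{n^{3}}, \]

    and the ratios quoted above are these coefficients c, normalised so that the fastest machine is 1.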

    A5.3 Virtual Memory Management

    Figures with a preceding asterisk indicate when the array size equals the real memory available.

    Array size is given in Mbytes.

    Array Size (MB) 3MB Apollo 1.5MB Apollo 2MB Sun 2MB WCW 2MB ICL
    Sequential write
    0.5 1.5 1.5 1.0 2.0 1.9
    1.0 1.2 1.2 1.0 1.6 1.5
    1.5 1.1 *1.4 1.0 1.5 1.4
    2.0 1.0 1.2 *1.1 *2.0 *1.4
    3.0 *1.5 1.5 1.0 1.4 1.0
    4.0 1.4 1.5 1.0 1.2
    Sequential read
    0.5 1.7 1.7 1.0 2.0 2.6
    1.0 1.3 1.3 1.0 1.5 1.9
    1.5 1.0 *1.0 1.5 1.5 1.6
    2.0 1.0 1.8 *1.9 *2.1 *1.7
    3.0 *1.0 1.1 1.8 2.4 1.2
    4.0 1.1 1.0 1.5 2.3
    Random read
    0.5 4.0 3.9 1.0 1.0 1.7
    1.0 3.4 3.4 2.6 1.0 1.4
    1.5 1.0 *1.0 25.2 22.1 9.1
    2.0 1.0 4.9 *75.0 *70.3 *340.2
    3.0 *1.0 4.9 44.3 36.0 216.8
    4.0 1.0 1.1 8.7 4.4
    Spiral write
    0.5 1.4 1.4 1.0 1.7 1.5
    1.0 1.2 1.2 1.0 1.5 1.3
    1.5 1.6 *1.7 24.7 1.1 1.0
    2.0 1.0 11.7 * *1.4 *19.5
    3.0 *1.0 10.5 1.2
    Spiral read
    0.5 1.5 1.5 1.0 1.7 2.0
    1.0 1.5 1.5 1.0 1.7 2.0
    1.5 1.4 *1.4 22.7 1.0 1.0
    2.0 1.0 11.9 * *2.7 *20.1
    3.0 *1.3 8.9 1.0

    A5.4 Compilation System

    Function Apollo Sun WCW ICL
    Fortran 77 Libraries Single Precision
    EXP 1.0 7.4 5.1 7.2
    ALOG 1.0 5.0 4.4 8.6
    SQRT 1.0 1.6 7.7 8.6
    SIN 1.0 1.5 4.4 8.9
    TANH 2.5 5.4 2.5 1.0
    ACOS 1.0 11.1 5.1 2.5
    Average 1.0 4.3 3.9 4.9
    Fortran 77 Libraries Double Precision
    DEXP 1.0 7.0 3.1 9.0
    DLOG 1.0 5.9 2.4 10.1
    DSQRT 1.0 10.5 4.9 71.3
    DSIN 1.0 4.6 2.2 10.5
    DTANH 1.0 14.4 6.8 5.2
    DACOS 1.0 4.8 2.2 3.4
    Average 1.0 7.9 3.6 18.3
    Fortran 77 Libraries Complex
    CEXP 1.0 4.4 2.1 3.7
    CLOG 1.0 10.4 4.6 7.5
    CSQRT 1.0 2.4 1.1 1.4
    CSIN 1.0 4.0 1.9 2.1
    Average 1.0 5.3 2.4 3.7
    Pascal Library
    exp 1.0 19.6 26.6
    ln 1.0 5.4 5.3
    sqrt 1.0 10.6 16.6
    sin 1.0 4.8 8.8
    arctan 1.0 3.4 6.0
    Average 1.0 8.8 12.7
    C Library
    exp 1.0 5.4 2.4 7.4
    log 1.0 4.9 2.0 8.0
    sqrt 1.0 7.4 3.4 11.6
    sin 1.0 4.0 1.9 7.6
    tanh 1.0 3.9 1.8 5.4
    acos 1.2 1.0 1.1 1.2
    Average 1.0 4.2 2.0 6.5

    A5.6 Inter Process Communication

    Message size(B) Apollo Sun
    1 1.9 1.0
    8 1.9 1.0
    16 1.6 1.0
    64 1.9 1.0
    128 1.6 1.0
    256 1.3 1.0
    512 1.0 1.0
    1024 2.8 1.0
    2032 1.0

    A6.3 SPY Screen Editor

    Function Sun Perq2
    scroll line by line forwards 3.5 1.0
    scroll backwards 28 lines at a time 1.5 1.0
    screen redraw 2.3 1.0
    search for 60 nulls in 23 lines 1.0 1.5
    quick search for 21800 nulls in file 1.0 2.0
    search for 1140 'a' 1.0 1.3
    quick search for 1140 'a' 1.0 1.9
    replace 1140 'a' by 'A' 1.0 1.4
    delete 4350 [a-z]+ 1.0 1.4