USA Visit 1979

Bob Hopgood (and Geoff Manning)

This trip was made just as Bill Walkinshaw was about to retire and Bob Hopgood was about to take over running the combined Computing Divisions at RAL from September. Geoff Manning and Bob Hopgood were involved in most of the meetings, although only Bob attended SIGGRAPH.

1. LIVERMORE

The day was divided into two main parts: the morning was spent mostly discussing the two CRAY-1 systems, while the afternoon covered Network Systems Controllers and mass storage. The main people involved in the discussions were:

1.1 Structure

The Laboratory has about 7067 staff, of whom 919 have PhDs and another 1490 have first degrees. There are 1589 administrative staff and 3018 technicians. About 260 people are employed in computing systems and operations, with about another 140 in the two applications groups. The Laboratory budget for 1980 is $368M, the major breakdowns being $193M for defence, $84M for energy and $20M for environmental research. The site covers about 1 square mile.

There is a Computing and Chemistry Division run by Gus Dorough with Bob Lee under him in charge of computing. The main groups are:

  1. Network Systems (includes operating system development) run by Sam Mendicino - 25 people
  2. User Services - Ranglebett - 50 people
  3. Computer Operations - Vranesh - 150 people
  4. Applications Division - Kisui
  5. Applications Division - Bell

The User Services group does work on compilers and the like.

1.2 CRAY Systems

The most surprising thing is that both CRAY systems are run as general purpose systems supporting both batch and interactive work. The LLL CRAY-1 has its own locally produced operating system, which was developed on the MFECC CRAY. They had initially decided on a system with a number of VAX 11/780 front ends for interactive working, but this was attacked by their main users, who control the money and who insisted on running interactively on the CRAY itself. They felt this was necessary for debugging large production codes; there was no way it could be done on another system.

Mendicino's group is producing another operating system which is capability based and has a message passing structure. It provides restricted domains, etc. This will initially have a Shell around it so that it looks like the existing Livermore operating system.

It came over very clearly that they were much happier with the CRAY than the STAR-100s. It really did provide a lot of power, and people were able to gain access to it with little initial change to their programs. Typically they got a scalar performance of 2-3 times a 7600. Analysing some large codes gave:

No Vectorisation                       3.2 × 7600
Vectorise                              9.9 × 7600
Vectorise + 100 lines of assembler    11.5 × 7600
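These figures imply that a large fraction of the work in these codes vectorises. As a rough illustration (the 10:1 vector-to-scalar speed ratio is my assumption, not a Livermore figure), Amdahl's law with vectorised fraction $f$ gives

$$\frac{9.9}{3.2} = \frac{1}{(1-f) + f/10} \approx 3.1 \quad\Rightarrow\quad f \approx 0.75$$

so roughly three-quarters of the scalar execution time vectorises, with the final 100 lines of assembler squeezing out part of the remainder.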

They have decided not to upgrade the STAR-100s to CYBER 203s.

The MFECC CRAY is similar in configuration to the LLL system. It has a number of off-site users connected via a network with DEC10s at the remote sites. The communications system uses an earlier version of DECNET. Most of the links to remote sites are 50K bits/sec. Unlike LLL, MFECC do not use the NSC bus - they felt it had not quite arrived when they were installing their system. They have an old CDC 6400 which acts as a control computer for staging discs from their CDC mass storage system. Terminals are concentrated using PDP11/50s and a PDP11/40 concentrates the RJE load. They have developed their own connection from mass store to CRAY. The whole file system, consisting of two 600 Mbyte disks, the mass store and a CALCOMP ATL (Automatic Tape Library), will act as a multi-level file store, with the user not having to worry about which level his data resides on. All accesses are via file name.

They started by using CRAY's own operating system, with the 7600 controlling the number of jobs submitted to the CRAY - there was no back-up in the CRAY, so any work in it at a break was lost. By October 1978, they had fully saturated the CRAY but were only getting 60% of the CPU to user programs. They have since moved to their own operating system, which differs in detail from the LLL system, and have now achieved 87% efficiency. Maximum job size is 750K words. When the CRAY is working at full capacity, 25% of a 7600 is used up in servicing it. Typically, the CRAY time-shares 50 to 70 jobs at a time, with a large proportion of them interactive users.

The total operator cover for the CRAY, 7600, 6400 and PDP11 systems is 10 (2 operators on at a time).

The CRAY will, in the future, be run without the 7600 front end.

1.3 Network Systems Controllers

Livermore were one of the first large users of the Network Systems Corporation bus. Their system currently has the form:

[Diagram: the NSC bus linking the CRAY, the 7600, IBM compatible discs with their PDP11 control computer, an OCTOPORT, a SEL 3275, two PDP11/45s carrying terminals, a PDP11/05 and 11/34 providing a terminal back-up path, an STC controller with 2 x 6250 tape drives, CHORS high quality output, the CDC Mass Store, disc drives and the CALCOMP ATL with its PDP10 control computer, attached via NSC adaptors (A120, A130, A220, A327, A410, A470, A510, A540). Connections to the 7600 are to PPUs which stage to disc via another PPU outside the 7600 central system.]

Livermore Configuration

As can be seen, the system is non-trivial! The OCTOPORT connection allows non-standard computers to be attached as long as they conform to the defined interface.

The PDP11/45 and 11/34 systems provide alternative routes for terminals to access the CRAY. They have had problems with the two PDP11/45s and their NSC controller: the two systems are connected for fail-safe and load sharing, and NSC had not envisaged two interconnected systems being attached to the same controller.

They were quite proud of their 7600 connection which allowed staging of files on to 7600 discs without going through main memory. This was a modification of their mass store connection.

The NSC equipment has been extremely reliable. They have had no more than two or three errors in the last 6 months. The major problem is if you keep pulling boards out. Also, the statistics collection software for the NSC equipment is only just about to be delivered. Consequently, they have not managed to get any counts on retries, etc. Once this is achieved, they will start organising preventive maintenance. They have had almost no errors in terms of corrupt data transfers since the system was installed.

They have an assembler for the NSC box which runs under SCOPE. However, they tend to run standard NSC code in the controllers wherever possible. To get a copy, contact Richard Watson (or Donnelly?).

As can be seen from the diagram, they do have a variety of devices connected directly to the NSC bus. IBM compatible discs are accessed by the CRAY and 7600 without rewriting device driver code in the mainframes. This is done using a THIRD PARTY TRANSFER. Each device connected to the bus has a control computer associated with it: the IBM disks have a PDP11 and the CALCOMP ATL has a PDP10.

The disc transfer request is sent by the host to the control computer, which sends the relevant channel commands to the device; the device then initiates the transfer directly back to the host. The end-of-transmission signal and any error signals are returned to the control computer, which passes the appropriate status back to the host.

[Diagram: Third Party Transfer - steps 1 to 5 between HOST, CONTROL computer and DEVICE: (1) the host sends the transfer request to the control computer; (2) the control computer sends channel commands to the device; (3) the device transfers the data directly to the host; (4) end-of-transmission and error signals return to the control computer; (5) the control computer passes the appropriate status back to the host.]

This seems to be a function that TESDATA is not aware of. Apparently there is no documentation on it yet but it should soon be available.

Cost of controllers in the USA is $33,000 but the A327 is $60K.

The maximum separation between nodes in the Livermore system is about 700 feet, although the acceptance tests are done with rolls of coax to simulate 2000 feet. They have a simulator for their network. They had no information on the Bell optical connections or the slow speed bus; they thought it was at least two years away.

1.4 Mass and Archival Storage

Both LLL and MFECC have CDC 38500 mass storage systems. Both are in the process of buying CALCOMP ATL (Automated Tape Library) systems.

The old IBM photostore will cease to function at the end of this year and they have the problem of finding a new archival medium to replace it. Their conclusion is that, at this point in time, the only option they have is to go back to 6250 magnetic tapes!

The mass store on both systems is transparent to the user who tends not to know whether his files are on disc, mass store or even tape. Archiving is provided by the system automatically, although the user can specify that particular files are available for archiving. Livermore purchased their mass store from CDC mainly because of price - they did not have to have a front end system to run it. Los Alamos, on the other hand, bought from IBM and were forced to purchase a small 360 to control it as that was the only means of providing access to the 7600s. The difference in total price was $4M compared with $2M.

Livermore are quite happy with the mass storage system but do not feel it has much future. It certainly does not help the archival problem. With the passing of the IBM photostore, they have been left with 10 full on-line loads (about 10 × 10¹² bits) that need archiving. Luckily only 3.5 × 10¹² bits are still pointed at through the file directories and, therefore, need to be kept. Unfortunately, they made it very easy for users to say they still required the information and, consequently, 80% is supposedly still required. Berkeley and Los Alamos insisted that users had to organise the copying themselves and managed to archive only ten and fifty per cent respectively.
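As a rough sketch of what that residue means in tape terms, assuming a full 2400 ft 6250 bpi reel holds about 1.2 × 10⁹ bits once inter-block gaps are allowed for (my assumption, not a Livermore figure):

      PROGRAM REELS
C     Rough size of the archive rescue in 6250 bpi reels.
C     1.2E9 bits per reel is an assumed effective capacity,
C     not a Livermore figure.
      REAL BITS, PERREL
      BITS   = 3.5E12
      PERREL = 1.2E9
      PRINT *, 'Reels needed: ', BITS/PERREL
      END

That is of the order of 3000 reels, which explains why they regard a return to tape as a stop-gap rather than a solution.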

Livermore have a tape mounting problem which is probably not as great as ours. They have 40,000 800 bpi tapes and 10,000 6250 bpi. They are impressed by the CALCOMP ATL. Theoretically it will do 100 tape mounts an hour but it is probably not sensible to run it that hard. It consists of two rows of tapes with a robot moving up and down mounting tapes on existing decks. The standard is to use IBM self-loading decks with it.

CALCOMP provide software for the IBM systems which is quite comprehensive. The maximum size of a unit is 8000 tapes (100ft long and 16 sections). A more sensible size might be, say, 2000 tapes. Putting 6000 tapes on one ATL might not give you quick enough access. The system costs about $200,000 and CALCOMP have between twenty and fifty systems installed.

Livermore's view is that the main archival and mass storage in the future will be optical digital disk (they don't like the term video disk because of its analogue connotations). In their view, the systems, in order of availability, will be:

  1. Philips - Magnavox (end 1980, early 1981)
  2. Xerox
  3. RCA - possibly if they get backing
  4. Kodak
  5. Harris
  6. Omex

The Philips system uses a 12in double-sided disc: a plastic layer on each side, with the information recorded on the inside.

They can store 10¹⁰ bits per side at the moment. The system is write once/multiple read. Life testing is still in the early stages, so whether it will last longer than 10 years is still unknown. Accelerated aging processes may or may not be applicable (they currently keep dipping discs into water). The medium used is sturdy plastic. The Philips system works like a standard juke box and will handle 128 or 256 disks; it should be possible to get 4 × 10¹² bits per unit. The recording is along an Archimedean spiral with a gap of 2 microns between tracks. Philips have a system already built. They hope to have a 2 × 10¹¹ bit prototype available by 1980, 10¹³ by 1982 and 10¹⁴ to 10¹⁵ by 1984.
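As a rough sanity check on the 10¹⁰ bits per side figure (the 6 cm and 14 cm radii of the recording band are my assumptions, not Philips figures), a 2 micron track pitch across an 8 cm band gives

$$N \approx \frac{0.14 - 0.06}{2 \times 10^{-6}} = 4 \times 10^{4}\ \text{turns}, \qquad L \approx N \cdot 2\pi\bar{r} \approx 4 \times 10^{4} \times 2\pi \times 0.10 \approx 2.5 \times 10^{4}\ \text{m}$$

of track per side, so 10¹⁰ bits corresponds to a bit cell of about 2.5 microns along the track, which is entirely plausible for a gas laser system.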

Details of the Xerox system were given but are confidential at the moment. It has some similarities with the Philips system. EOS Pasadena are to use the Xerox system for a Library of Congress catalogue. Error rates appear to be good if error correction is provided: raw error rates are of the order of one in 10⁶ or 10⁷, and polynomial error correcting codes are used. The medium used is glass disks. Details of the RCA system are available in the published papers we already have.

The Harris system is being developed in conjunction with the George C Marshall Space Centre. It consists of microfiche stored 512 per tray. The medium is likely to be some exotic sandwich which is made permanent by chemical fixation. It is likely to need a reasonably large amount of disc buffering in the recording process, as the time between recording and availability for use is of the order of days (it is a chemical process). Four trays should hold about 10¹³ bits.

Omex use a 4in square plate mounted in slide trays. The Company's main designer is Les Burns (ex IBM photostore). Capacity is at most 2 × 10⁹ bits per slide. The technology differs slightly in that they use a diode laser, whereas Xerox/Philips use a gas laser. A 5ft × 4ft system could hold 10¹² bits.

At this point in time, Livermore see no alternative to tapes. Archival properties are good as long as tapes are kept away from the computer room: they need forty per cent relative humidity and 70 degrees F, and need to be wound with constant tension rather than constant torque. A tape needs to be back in the computer room for at least a day before being used.

1.5 Printers

Livermore have two Honeywell Page Print Systems (PPS) and a colour Xerox printer as well as several colour Xerox copiers. The PPS system is used for fast computer output. It produces three pages per second. The process uses special paper and works out at about 1.5 cents per page. The system cost about $125,000 and Livermore produce three million pages per month. Their view is that the system should not be pushed harder than this or it is likely to have maintenance problems. The system cuts output, punches holes in it, etc. and places it in a number of collating trays.
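As a rough duty-cycle check on those figures (my arithmetic, assuming the quoted three pages per second is sustained while printing):

      PROGRAM PPSDC
C     Hours of solid printing per month implied by
C     3 pages/sec and 3 million pages per month.
      REAL RATE, PAGES
      RATE  = 3.0*3600.0
      PAGES = 3.0E6
      PRINT *, 'Printing hours per month: ', PAGES/RATE
      END

That comes to about 280 hours, over nine hours of continuous printing a day, which supports their view that the machine should not be pushed any harder.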

Livermore's output per year is enormous.

Type      Millions of pages per year   Output form
PPS                  36
FR80s               100                 Fiche
FR80s                10                 16mm/35mm film
FR80s                 6                 Colour film
FR80s                 0.1               High Quality Text
DICOMED              10                 High Quality Text
RJE                  10

An interesting digression was that Donald Knuth has been working on a TeX (Tau Epsilon Chi) text page production system. It is supposed to be very good. Knuth will send details and updates of the system for a $7 fee. Initially it was written in SAIL and is being recoded in PASCAL.

The Xerox colour system was an early development version of the production system that is now available. They have had it about two years. The interface to the system needs to be given RGB signals. Livermore have a baroque access method: the output from their standard TMDS raster display system is put into a (modified) Versatec 210 interface box to get the scan lines one at a time, and from there goes through a buffer to the device. They have timing problems and it seems a real lash-up. The page size is theoretically 6.4in × 13.75in, but Livermore restrict it to A4 (100 dots per inch) and only about half of this area can be written to due to the interface problems.

The production systems sound as though they may be significantly better. Interfaces are available to Tektronix and Ramtek displays. Bell are working on a PDP11 interface.

The system costs about $25,000. Copies are produced at 30 secs for the first and 20 secs for subsequent ones. Interestingly, Xerox sell it but do not lease it.

DUNN engineering are working on a cheaper colour copier ($15,000 to $20,000) using a colour polaroid camera and standard Tektronix display.

1.6 Minor Items

The following papers etc were obtained:

Livermore had heard a rumour that Seymour Cray was moving from Minneapolis to Colorado and that the CRAY-2 would be built there. Also that they would be designing their own chips from scratch for the new system.

In terms of large scale processing, Livermore felt that there was still a lot of mileage in conventional processors; we were nowhere near speed-of-light problems yet. The IBM Josephson Junction machine could give at least two orders of magnitude increase in speed.

Livermore were likely to go to satellite transmission from remote sites because it was likely to be much better than leased lines in terms of error rates, and also less expensive. They tended to have significant faults on leased lines, varying between one a month and one a year. The next CRAY user group meeting was to be held in the UK, and several people from Livermore, including Sam Mendicino, were likely to go.

2. IBM SAN JOSE AND SANTA TERESA

The morning was spent at San Jose talking to Dan McInnis. San Jose is part of the Data Processing Group. It is responsible for Storage Products development and the 3800 laser printer. The other parts of the Data Processing Group include Poughkeepsie (303X development), Endicott (4300 series) and Raleigh. Manufacturing of discs is done at San Jose, while tape manufacture is at Tucson (a new factory). Santa Teresa is purely used for program development. There are about two thousand people in Storage Products development and nine hundred at Santa Teresa.

The standard development path for any product is:

  1. Time Zero when new product is named etc.
  2. Phase 1 Review. By this stage, technology for the new development should be known. The schedule between then and the product announcement should be defined. There should be a target estimate for sales.
  3. Phase 2 Review. By this stage, some systems should have already been built. A firm price should have been defined.
  4. Phase 3 Review. The system should have been tested. Depending on the product, three or four (or in the case of discs three hundred or four hundred) systems will have been sent out and tested in a variety of environments.
  5. Announce
  6. First Customer Product

At any of the Review dates, it is possible that the product will be stopped. At the Phase 3 Review stage, the system may be held before announcing while it waits for other products. The average time from start to announcement is five years for discs but 3800 and 3850 were nearer ten years. Managers would change several times during a ten-year life. For an update of an existing product (double density 3330), the timescale might be two years.

The breakdown of times in each period is:

The long period up to Phase 3 is where major errors are likely to be discovered.

Somewhere between five and nine projects are started for each one completed. There are about thirty products in the disc division at a time. Products are normally abandoned in Phase 3 for business reasons and in Phase 1 for technology reasons. The major changes in disc technology that can improve the product are the media, the head-to-surface distance and the closeness of the tracks.

2.1 Storage

The trends in disc production are given in the next table. The latest device is the 3370. This uses Winchester technology for the heads. The surface has silicon added to allow a smoother landing when the heads settle on the disc. There should be no head crashes with this technology. It uses a much lighter read arm with an amplifier on the end of the arm to amplify the small signals. It has two read heads and head selection is done in the amplifier. For other disc systems, the winding of coils in the head is still done by hand. The 3370 is the first system where this is done automatically. This has meant that head manufacture is a totally new process which has caused considerable changes at San Jose. The amplifier is a single special purpose chip.

It is obvious that IBM have a double density 3350 somewhere in the design cycle. It is very unlikely that it will be announced. The trend suggests that the next announcement will be a 1200 Mbyte drive at about 1/3 or 1/4 of the 3350 price per Mbyte.

Disc technology is likely to continue with fixed rather than demountable systems. The technology could get to 8000 Mbyte drives and prices down by another factor of four. However, other technologies may take over before this is achieved.

Product Comparisons

Product   Date  Capacity  Density        Access      Data rate    Rotation    $/Mbyte
                (Mbyte)   (Kbits/sq cm)  (millisec)  (Kbyte/sec)  (millisec)
RAMAC     1956     5         0.31          600           10          50        153
1311      1962     2.7       8             150           78          40        165
2311      1964     7.25     17              75          156          25         75
2314      1965    29        34              60          312          25         23
3330-1    1970   100       120              30          806          16.7        7
3330-11   1973   200       232              30          806          16.7        4.65
3340      1973    70       263              25          885          20.2        6.75
3350      1975   317       465              25         1198          16.7        2.50
3370      1979   571       930              20         1859          20.2        1.18

There are unlikely to be changes in rotation speed (the disc wobbles if rotated any faster) or in the number of heads (another set of heads is difficult to register and causes perturbations in the air flow).

Other technologies likely to come into use are charge coupled devices, bubbles and RAM. The first two of these are similar in price. So far, there has been trouble with yields of charge coupled devices. RAM may use Josephson Junctions. Even 64K bit chips are down to $600/Mbyte/month. Other technologies which are slightly more distant are optical discs, electron beam and holography. IBM implied that they had no optical disc storage technologies in the research and development cycle at San Jose.

Looking at the price of storage:

The new devices have a long way to go to get down to 3370 prices. It is possible that physically smaller discs (PICCOLO technology) will be used where the customer does not initially need a large amount of storage. Development of the 3850 mass store seems to be mainly in the area of software. There could be a doubling of storage capacity, but this seems unlikely in the near future. IBM want to make the 3850 more attractive first by increasing the number of places it can be used, thereby making it easier for the customer to justify the purchase, or the rental at $15,000 per month.

The HSM (Hierarchical Storage Manager) will be developed to provide automatic backing up of discs onto the MSS: for example, the system manager defines times of day and the system does the dumps automatically.

IBM see the future as Fixed Block Architecture, as used on the 3370. They feel that BDAM should not be used in the future.

2.2 Printers

IBM are involved with two non-impact technologies, the 3800 laser printer and ink jet. The latter tends to be low cost and low speed and not very high quality at the moment. The 3800 laser printer gives high speed, high quality output. It uses fanfold paper. I have some examples of output much better than the copies that Chris Osland has.

The trend in the future will be to cut-sheet output and double-sided copying; Xerox have already announced such a printer.

2.3 Santa Teresa

This visit in the afternoon was of little value. The Santa Teresa Laboratory is responsible for developing a number of software products including IMS, VSPC, APL, GIS and BASIC.

The major interest is the building which was specially designed for program development that includes office furniture etc. I have a brochure on it.

The two people we saw talked about APL and DL/I, the description language for IMS. There was little of interest to the Laboratory. In our view, the people involved were not that competent.

Geoff Manning at Santa Teresa, August 1979

3. CDC

The visit to CDC lasted one and a half days. The major areas looked at were the two computer ranges, CYBER 170 and CYBER 203. We also looked at disc, mass storage and bus sub systems.

3.1 Corporate Overview - J E Herman

The Company now employs fifty-one thousand people. There are a number of components that make up Control Data and the large systems side is a small part of the whole. Peripherals account for over 1/3 of the Company's profits.

There is a large Commercial Credit branch which came about by a merger in 1968. This makes loans to private individuals and large companies. Currently, CDC own 2300 aircraft and 40,000 cars. It has 800 consumer offices in the USA, 24 in the UK. Trading is done under the name First Fortune. It has 800,000 insurance policies out.

The peripherals side is a joint company with Honeywell. They have the major holding with about thirty per cent owned by Honeywell. The peripherals side supplies one third of the world's discs. There are fifteen hundred companies which buy peripherals from CDC.

3.2 Super Computer Program Status - L M Thorndyke

The CYBER 203 was announced in January and is the current largest CDC machine. The company intends to continue to market two ranges of computer, the 170 and 200. To some extent the 170 will be used as front ends to the 200 series. The connection will be via CDC's high speed bus. By defining CYBER 203 as a CDC range, it will eventually have the standard NOS operating system interface to the user. CDC do not see CYBER 203 as a general purpose machine and always expect it to be front-ended by a conventional CYBER via the bus.

The CYBER 203 has LSI scalar orders and vectors similar in speed to the STAR-100. There will be three models:

The CYBER 205 which has LSI vectors (about twice the current speed) is less than a year from announcement. It will have channel speeds starting at 6 Mbits/sec up to 80 Mbits/sec with one high speed 200 Mbits/sec channel. There will be support for foreign front-ends including IBM. The CYBER 205 will come in a two pipe and four pipe version, the second effectively doubles the speed of vector processing.

The CYBER 203 was put back a year by Fairchild's initial inability to produce a good enough yield on the chips. The machine is built on fifteen-layer Teflon boards, and the number of chip types is being reduced: it will be down to twenty-five in the CYBER 205. A CYBER 205 should be available for benchmarking around the second quarter of 1980. The CYBER 205 will have a 4K bit memory chip in place of the current 1K bit chips, allowing the memory size to go up to 8M words.

There are currently two CYBER 203s in checkout and two more nearly built. The original CYBER 203 prototype is having the vector part stripped out to make it the prototype CYBER 205.

CDC have two orders for the CYBER 203 from the Navy and Air Force. The cost of a 0.5M word CYBER 203 is $5.8M and the 2M word machine is $11.8M (60% of the cost is in the memory).

3.3 CYBER 170 Developments - M J Mykkanen

The CYBER 170 range was introduced in 1974. The CYBER 175 is similar to the 7600 in architecture with a slightly faster minor cycle time (25 ns compared with 27.5 ns) but its I/O is down by a factor of four. The CYBER 176 was introduced in 1977 and is very similar to the 7600 but slightly more powerful. It has a 200 nanosecond semiconductor memory and can have ECM data storage. The architecture differs from the 7600 in so far as the basic machine has a single level memory and the ECM cannot have instructions executed in it. Consequently, programs making use of LCM features may need changing.

The systems use CDC 885 600-Mbyte discs. FORTRAN 77 should be available next month. Low level X25 protocols will not be released until mid-1980.

3.4 Lunch - L Thorndyke

Thorndyke is currently in charge of CYBER 200 development but was previously head of Peripherals Division. Over lunch we talked about the future of disc and tape storage. The impression was that discs could probably be stretched to 8000 Mbytes per spindle. Tapes might go to double the current density but are unlikely to go further. For archival dumping, they are more likely to go to new methods of writing. In particular, continuous writing with no gaps has been used effectively by CDC in special applications. He felt that Philips and Thomson were ahead in the race to produce a marketable read/write optical digital disc.

3.5 CYBER 200 Hardware/Software

This was presented by Neil Lincoln and Chuck Purcell.

CDC had been unenthusiastic about continuing the STAR-100 project until CRAY vindicated the use of a vector architecture. The company is now firmly committed to the CYBER 203. This has the great benefit that the 203 team can ask for, and get, resources from the CYBER 170 people for compatibility developments.

The CYBER 203 has an almost identical order code to STAR-100. The benchmark performances give:

      DO 10 I=1,50
      A(I) = I
 10   C(I) = C(I) + A(I) * C(I+1)

This loop will run at 10 MFlops.

      DO 20 I=1,100
 20   C(I) = A(I) + B(I)

will run at 20 MFlops.

Both can be done in parallel to yield 30 Mflops. This is a benchmark for a current tender.

The hardware consists of LSI for the scalar processing. It is made of ECL with 168 gates per chip. Speed is 20 nanosecs compared with the STAR-100's 40 nanosecs. The new memory is a 1K bit chip, the same as the CYBER 176. The chip is not soldered into position but is pressed in with a clip. Almost one amp of current passes through each chip.

The whole of CYBER 203 has been simulated down to the gate level using two 7600s. High performance swapping discs are connected directly in using the 200 Mbit/sec channel.

The SECDED (Single Error Correction Double Error Detection) is done in several places from the memory read out to when it reaches the vector pipe so that the point of error can be localised. The CYBER 203 should run at about 8 Mflops, twice the 7600 on scalar operations. In a highly vectorised code it will be about 25% better than STAR-100. The CYBER 205 will start at about 2.5 × 7600 for non-vector operation and, even for short vectors, have a speed far in excess of the STAR-100. Some approximate comparisons:

                    Scalar           Vector
STAR-100              0.3               3
CYBER 203             2                 3 
2-PIPE CYBER 205      2                18
4-PIPE CYBER 205      2                30

Typical CYBER 205 times will be:

Vector Start Up       3.5     × 203 
ADD                   2 or 4  × 203 
MPY                   4 or 8  × 203 
Scatter/Gather       36       × 203 
SELECT Operations    14 or 28 × 203

The two figures are for the two or four pipe version.

The CYBER 203/5 can have a 16M word backing store having a 1280M bit/sec transfer rate with 3.2 microsec access time.

The Ames machine, a specially configured CYBER 200, performs at 1.04 billion floating point operations per second.

3.6 Future Technology - A A Vacca

For high performance circuits, CDC are using Field Programmable Logic Arrays. The register file currently is eight to ten nanosecs and will go down to two to three nanosecs for CYBER 205. They hope eventually to get to fifteen hundred gates per chip. This would have ninety pins, four hundred picosec delays, less than eight watts of power consumption and be made of ECL. The prototype will be available by 1982.

The aim in terms of high density chips will be three thousand to six thousand gates with one hundred and sixty pins, dissipation less than two watts and two to five nanosec delay. Probably I2L or CMOS and a prototype should be available by 1983.

CDC have decided not to look at Josephson Junctions. Instead, they are going for liquid nitrogen at 77 degrees Kelvin; CMOS should increase its performance by a factor of two at that temperature. They feel that this is a more realistic approach for the next few years.

The aim is to produce a CYBER 173 with 1/2M words of memory in a 12in square box with a heat dissipation of one hundred and fifty watts.

In the longer term, CDC are looking at the possibility of going to gallium arsenide in preference to silicon.

3.7 Loosely Coupled Network - C Vigas

This was a disappointing period. Vigas had as much idea of the product as we did. We have some viewgraphs. It bears a close resemblance to the NSC bus. Major differences are:

  1. It can run 50 Mbits up to a mile with only two taps; with thirty-two taps the distance goes down to one thousand feet.
  2. At least one CDC host is required for down-line loading and diagnostics.
  3. Controllers cost $49,000 and have T connections to bus which can be easily added and removed. It should be announced as a product in the fourth quarter of this year.
  4. Memory size can be up to 128K bytes (useful if you have a microwave link in the system).
  5. The only non-CDC processors that can be linked are IBM and DEC.
  6. It cannot currently support a CDC processor running SCOPE.
  7. The processor is microprogrammable.
  8. The system always runs synchronously. The first processor to notice that no data has been sent for three cycles sends a resynch message. Not an ETHER really.

3.8 CYBER 203/5 Tour

We saw two CYBER 203s on the floor and the prototype CYBER 205. They have typical CDC wiring in the back.

3.9 Disc Products - C G Gust

We went to the Normandale facility, which is one of four in the USA; the other three are at Aberdeen, Rapid Waters and Redwood Falls. These act as feeder plants to Normandale, where production and R&D are done. The aim is to turn Normandale into a pure R&D plant. It has about eight hundred and eighty thousand square feet and employs six thousand people. Manufacturing is also done in Portugal and Germany.

CDC claim that they were the first manufacturer to deliver 100 Mbyte, 200 Mbyte and 600 Mbyte drives. They announced their 3350-compatible 600 Mbyte drive in 1977.

CDC wire all heads by hand on the Winchester technology. Their heads are six times lighter than their competitors'. Consequently, they can fly at nineteen microns compared with the competitors' twenty-five microns. The head is set at an angle:

markerWidth="6" markerHeight="6" orient="auto"> IBM 25 CDC 19

CDC Disc Heads

which is much better when any particles are encountered. The CDC heads are all underneath each other while 3350s had to be staggered due to cross talk. Consequently, it takes forty-five minutes to swap a 3350 pack.

The 38302 controller supports all CDC and IBM discs apart from the 3340. It can mix 100, 200 and 300 Mbyte drives.

They recently surveyed 1224 sites and found:

2314           9 packs per drive
3330-1       1.7 packs per drive
3330-11      1.3 packs per drive

The CDC 33502 drive consists of two 600 Mbyte spindles in a single cabinet where each drive can be configured as one 300, 2 × 300 or 2 × 200. Speed is nineteen millisecs compared with the IBM twenty-five millisecs. It is the fastest drive on the market. The cost of a 33502 A2 drive is forty-two thousand pounds - about ten thousand pounds per 300 Mbyte drive. They have installed two thousand systems and there was ample evidence on the floor that delivery could be made at short notice.

They have a 1200 Mbyte drive running in the Laboratory.

CDC do not have a solid state version of the 2305. They had intended doing one made out of CCDs. They have had these on order from two manufacturers since October 1978, but none have been delivered yet. STC, who have announced a disc, are getting their CCDs from the same manufacturer.

3.10 Mass Storage System - R L Hersch

We saw the in-house system working and it looks well engineered. They have, or will have, sixty systems in the field by the end of 1979 on forty different sites. They have two in Germany, two in Paris and one in Scandinavia. In particular, they have a system at Union Carbide, Oak Ridge, Tennessee, running on 2 × 360/195 using OS/MVT and HASP. The only changes needed are to have the 370 compatibility software installed.

The CDC software SMS provides automatic migration to and from tape. The size of the cartridge is based on their assessment of the market, that is, that dataset sizes fall in the following ranges:

0-2  Mbyte     70%
2-16 Mbytes    25% 
Over 16 Mbytes  5%

With an 8 Mbyte cartridge, 95% of datasets can therefore be contained on at most two cartridges. From their analysis of RL datasets, they think we fit this pattern.

The access times are such that up to 32 Mbytes, it is faster to access via cartridges. The only progression they see is to double the density and have 16 Mbyte cartridges. The length of tape is such that all the cartridge stays in the vacuum chambers. There is no actual tape winding.

The tape drive is identical to their standard drive apart from using wider tape. It is possible to read the tape in both directions which speeds access. To reach the furthest tape with the robot at the worst position takes about 4.5 seconds. The control unit on the MSS could also control other CDC drives. If another CDC controller for drives is also available, it is possible to switch them to give back-up.

The VDAM software is described in the manuals we have. The SMS software, in the first release, will be available this month.

All data paths to and from the MSS go through main memory, which can belong to any one of the connected hosts - the one designated the control processor.

CDC analysed our workload and saw 18648 tape loads in 9 days. That seems unlikely unless usage has escalated recently; a job making four uses of a tape in four separate steps may look like four tape mounts. However, assuming an MSS was installed, there appeared to be 4935 tape mounts left, which is none too healthy. I have a complete write-up of the assessment, as should Bob Taylor and Alan Mayhook. The machine looks a useful product: it appears to work with our current software and could solve our disc/tape mount problem, if only partially. It needs to be looked at in some detail.
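A quick sketch of the daily rates implied, assuming the 4935 residual mounts refer to the same 9 day period (the write-up should confirm this):

      PROGRAM MOUNTS
C     Tape mounts per day now and with a hypothetical MSS,
C     assuming both counts cover the same 9 day period.
      REAL DAYS
      DAYS = 9.0
      PRINT *, 'Now:      ', 18648.0/DAYS, ' mounts/day'
      PRINT *, 'With MSS: ',  4935.0/DAYS, ' mounts/day'
      END

That is roughly 2070 mounts a day falling to about 550: a big improvement, but still a heavy operator load.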

4. SIGGRAPH

The conference has expanded a great deal over the last two years and now has two thousand or more delegates. The main benefit of this is that the trade show has virtually all the vendors at it.

Alan Francis and I (Bob Hopgood) spent a considerable amount of time at the Raster Display stands getting more information on the Tenders we had received. It became evident that most of the UK agents have no idea what they are selling. Two examples:

Grinnell can handle 512 × 512 × 16 planes and can provide access to either the top or bottom eight bits for the colour look up table. Consequently they cannot be ruled out.

Genisco can only do the equivalent with special hardware which will cost us $10,000 in development. Also, the UK agent had completely misunderstood the requirement, so the number of boards required is significantly more than was estimated, needing a chassis which puts the price up by another one thousand pounds per display. As a result of our visit, both companies are clarifying their proposals.

4.1 PERQ

The Three Rivers stand was surrounded by people almost throughout and it was difficult to get any real time with the designer who is about the only person who could answer detailed questions. It looked significantly better than the TERAK. The display is excellent. Unfortunately, they had managed to get the hardware to the show, but very little software. Consequently, the same display tended to be shown all the time.

The system comes with a V24 interface and they could let us have a PASCAL handler for making it look like a teletype. They hope to provide X25 etc. in the future.

They are negotiating with GE Instrumentation and Service to maintain it. If we want to, they will fly a person out to install it. However, they expect that systems should be able to be installed by the customer.

The PASCAL is similar to UCSD PASCAL, including UCSD strings. It allows dynamic arrays to be passed to procedures as long as you pass the bounds in as well. It splits a PASCAL program into segments of 64K bytes, and there can be up to sixty-five thousand segments. Most static variables are put onto a stack which is at most 64K words long, so the sum of all static variables must be less than that. However, using pointer variables, arrays of 64K words are available.

One's own fonts can be generated and the system is likely to come with about eight or ten.

The Operating System is UNIX-like, with pipes. Parallel primitives are fork and join. The PASCAL allows recursion and you can have a number of processes running in parallel.

There is a need for a dump facility, as the interface to the main host and the associated software will not be available in our timescales. A cartridge drive (non-standard) would cost about $4000 and would dump the whole disc. A Shugart floppy (double sided, double density) would cost about $1500, but several floppies are required to back up a disc. We need to decide which we intend to have by October. Delivery will be in November. We should be getting a set of manuals in the near future.

The tablet is part of the basic price of the system. Using a finger, it is possible to resolve a pixel, just.

All the software is written in PASCAL and there is no assembler. The system has no better clock than the 17 millisecs line frequency.

There are a few simple draw line, polygon fill primitives in the window package which allow multiple overlapping windows on the display.

Double length and single length integers are supported. The system comes in four parts - display, keyboard, electronics box and tablet. We need to get a table to put it on.

The display looks very good indeed. They seem to have a large number of orders and it generated a great deal of interest.

4.2 Standards

GSPC presented their latest standard proposal which has made a significant attempt to come together with the German DIN proposal. I have a copy of the new proposals. GSPC, as a committee, is likely to be terminated later this year. A second committee may be set up to look at outstanding issues. The GSPC will have its work taken over by ANSI under Peter Bono.

The following changes are the ones that I remember the most:

4.3 Conference Papers

The conference is really getting too big. It was always difficult to get a seat even though rooms were large.

Most of the developments were in terms of greater and even greater realism. Transparent objects have refraction and glass thickness taken into account. The computer output is getting more realistic than the original.

Most of the emphasis has swung towards raster graphics. There were many papers on anti-aliasing for lines, curves, coloured regions etc.

In general, the papers on high-level problems such as system design were naive and out of date. Unfortunately, much of the audience felt that the papers were original! The quality of graphics, in terms of design, is much worse than Europe. In terms of techniques, it is much better - mainly due to the infinite amount of computing time available. The few papers presented by the UK contingent were well received.

Tom Moran had a good presentation on User Models which was the highlight of the papers for me. There should be some good results from Xerox in this area by next year. Tektronix exhibited their ignorance of anything approaching a user model which typifies most of their hardware and software systems.

4.4 Films

Films, as always, played a large part in the programme. There was hardly a session that did not include animation, either video or film. Much the same goes here as with the presentations: technically brilliant, but often poor animation.

The Voyager II trip to Saturn was shown in simulations taken from the data actually returned from the spacecraft. Minutes of 360/195 time per frame were common.

III showed some beautiful sequences, including a great many STAR WARS clips. Technically brilliant, but often taking up to a week to plot a sequence.

New York had a lot of animation including some new films. Technically and in content they are at least as good as Walt Disney. Really beautiful! Ed Catmull has left NYIT, which may have some effect. There are also rumours that the backers are getting discontented at the lack of any signs of a possible profit.

The Finite Element film was well received. The Computer Animation clips from ALIEN done on the FR80 also came over quite well. The best animation in terms of ingenuity and general appeal is still in the new OU maths programmes. They do very well on a low budget.

Publications brought back:

The Conference was well worthwhile. It is a pity that it is getting so large. The trade show is almost certainly the best in the world in the Graphics area including NCC.

5. IBM

Over three days were spent at IBM on the East Coast. Most of the time was spent at a large Systems Seminar for Technical DP Managers together with visits to East Fishkill and Yorktown Heights.

Much of the seminar was of little use as far as getting information from IBM concerning future systems and software. However, it was of value to me in terms of getting to know IBM jargon, history, etc. The course speakers were quite variable, ranging from very good to totally incompetent. Rather than go through it in detail, I will just summarise those parts which appeared to be of most interest to me.

Sam Pattow - Large Systems Design Evolution

The speaker invited the audience to suggest areas to be discussed. Almost unanimously, the response was future trends. He then spent the hour filibustering to avoid answering!

He pointed out the standard speed of light problems (8-9 ins = 1 nanosec, 6 feet of wire in a chip) and the efforts made in the 3000 series to overcome this. He saw the 3031 as a cost effective machine and the 3033 as the performance machine with the 3032 somewhere in the middle.

The usual statistics for the machines were given:

                                      3033       3032       3031    CYBER 205
Cycle time (nsecs)                      57         80        115        20
Speed × 370/158                          4.8        2.7        1       8-120
Max Mbytes                              16          8          8        64
Power (kva)                             61         57.5       21       200?
Space (sq m)                            62         55         40        39
Cooling                            Air/Water  Air/Water     Air     Water/Air
Store-buffer-store (bytes)              64         32
Interleaving                             8          4          4
High Speed Buffer (Kbytes)              64         32         32
Director channel speed (Mbytes/sec)      6.7
Channel speed (Mbytes/sec)               1.5
MIPS                                     5

I have added some CYBER 205 figures for comparison. The look-ahead on the 3033 was described in some detail (the three instruction buffers allow both of the two most likely paths to be decoded). The choice of second most likely path is determined by the branch instruction obeyed. They have looked at how certain instructions have been used in the past and extrapolated from there. My own view is that this is a dangerous practice: changes in language often cause instructions to be used in quite different ways.

The decision to make all writes to cache also write through to store was aimed at higher system integrity. It also allows the system to run without a high speed buffer, which might be useful in diagnostic environments. Reads from backing store always go straight to store.

Vic Smith - Channel Directors

A description of the Director philosophy was given. The decision to microcode the Directors was almost certainly taken for flexibility. The point came over, here and later, that more and more of the channel operations would be moved from the central system to the Director. It has the power of a 370/158.

The main point made was that the BYTE MPX used most of the cycles in the microcode and ultimately limited its performance. The estimate was 8% for the BLOCK MPX.

Priority should be organised so that the highest data rate device is on Priority 1, critical command claim next, etc. They recommended that the aggregate data rate should be less than 6.7 Mbytes/sec; beyond that one gets overrun on DASD, as CCW-channel decoding runs out of microcode cycles. At the 3350's transfer rate of 1198 Kbytes/sec (see the table in section 2.1), five or six concurrently transferring drives reach that ceiling. The 3350 accumulates a count of overruns, and a certain number of overruns should be expected.

The aim should be to:

  1. Spread I/O code across Directors
  2. Spread load across channels (2305 on 1, 3350 on 2 etc.)

He made the point that the current Director and channel could not support faster discs than the 3350.

Pat Rousel - 3036 Console

It was clear that IBM were proud of the console activities including the remote support facility (although they had had some resistance from their maintenance personnel on site). They had a data bank of previously logged faults in Boulder. It was unclear whether this was available to users.

A point that was made is that sites with non-IBM equipment on their machines had organised SWAP teams to analyse console output before going to IBM with problems.

The remote service facility used a 1200 baud line with error handling on the line and it could go to smaller block sizes if necessary to avoid errors.

Each 3000 has to have separate console stations so large systems tended to proliferate them. One advantage of MPs!

John Eclert - Microcode

The main point made was that going to a microcoded machine did not imply loss of performance or lower cost. The cost/performance of microcoded and non-microcoded comes out much the same, i.e. the performance is improved by adding buffering etc. and the price increases.

Microcode was used on all 3000 processors. The 3031 used a 72 bit word split into seventeen fields, with each field defining one operation to be performed in that cycle. The 3032 had 106 bits, and the number of fields had gone up to thirty-one by the time the 3033 was reached.

It was clear that further assists such as APL assist and VM assist would be coming out. Eventually, large parts of the machine operating system would be in microcode. The message is - don't mess about with VM, CP, etc.

The other point made was that the 370 order code would not last much longer: 31-bit addressing needed to be made available in a more general way. It was clear that the 370 order code was less of a firewall than the operating system interface.

Rich Partridge - AP/MP Overview

It is clear that multi-processor systems are the way IBM and customers are going. The advantages seen are reconfiguration, simplification, cost/performance, performance increase, load levelling, saving money on licensed programs, etc.

The point was made that AP means Asymmetric as well as Attached Processor. It was clear that the AP/MP configurations on the 370/168 were seen to be less pure than they might have been and that a more elegant solution would be available with the 3033, the AP configuration being much closer to the MP one.

[Diagram: AP configuration - a shared STORE with CHANNELS and an instruction processor, a second instruction-only processor, and the communication path between them; dotted parts show the MP additions]

AP Configuration

The dotted parts indicate the additions to an AP 3033 to turn it into an MP configuration.

Additional improvements over 168 MP are that each processor has its own cache but the caches are linked so that each knows the other's update status. Page Table updates are also passed from one processor to the other.

Nierenberg - Component Technology Trends

As far as chips are concerned, most of the Technology production is located at East Fishkill. It employs eight thousand people and produces gas display panels as well as chips.

There are two main technologies called Dutchess and Purdue. The Dutchess technology is used in the 3033 while the more advanced Purdue is used in the 4300.

During the trip made to East Fishkill and the plant tour at Poughkeepsie, we saw the production and integration of the Dutchess chips into the 303X processors. We also saw some of the production of the Purdue chips.

The Dutchess technology provides about forty circuits per chip; the chip is mounted on a ceramic substrate with an aluminium shield placed over the complete part.

At Fishkill, we saw the silicon crystals being produced in house (approximately six units, with four crystals being grown per unit; the whole process takes about eight hours per crystal). There seemed to be a reasonable queue of crystals to be produced, plus some specials being done for the R & D Department.

The chips are placed in position on the ceramic substrate automatically. The machine used for taking a completely randomly orientated jumble of chips and feeding them out the right way up and in the right orientation was fascinating; it works on mechanical principles. Chips are placed on the ceramic and fused in an automatic machine. There were at least four of these, but only one in four seemed to be working with any degree of reliability. Complete units then have a layer of quartz sputtered on top and are mounted in an aluminium case. Both machines were automated. Covers were orientated using the same principle as chip orientation.

The wiring of boards into the 3033 was semi-automatic: the machine shone a light where the next wire had to be positioned. The wires looked as though they had all been made up automatically, and there did not seem to be any reason why the complete wiring could not have been done by machine; it had the X-Y accuracy and the intelligence to do it.

The modern Purdue technology is obviously the way IBM intends to proceed in the future. It is a very modular process. Components on a chip are connected to form circuits which are then interconnected at the chip level. Additional circuit connections are made at the logic module level and at the card level. The logic chip is 4.5 mm square and contains seven thousand components (resistors, transistors, etc.) producing seven hundred and four logic circuits. Circuit speed is about three nanosecs.

Details of the exact production of the Purdue technology got confusing at times, with people often telling conflicting stories. This is my understanding.

The logic chip contains three layers of interconnection wiring on the chip itself (about seven feet of wire). The chip is started by making a single silicon wafer containing one hundred logic chip areas all with the same design. This stage is done using standard optical methods.

These are then personalised using EBL facilities. The wafers are fed in special chambers to the E Beam machines, which apply the top three layers. There are three E Beam machines on the production side. The resolution used is two microns and they take about six minutes per wafer per layer. Production is about fifty wafers a day. The EBL machines are shaped beam systems, the shapes being various types of ellipses.

At the E Beam stage, it is possible to define different circuits on each chip. Thus small volume productions are done with several types on one wafer.

The chips are mounted on ceramic substrates which are all produced internally at IBM. There are various sizes of substrate (35mm and 50mm seem to be the most common). The ceramic substrate contains twenty-three layers of wiring for interconnections within the ceramic. There are about fifteen feet of wire in the ceramic.

Up to nine chips are mounted on a single ceramic substrate, although often only six are used. Wiring between the chips is achieved by the wiring in the ceramic. The module is completed by having a fancy gold coloured plate on top of it.

The chips are connected to the ceramic by a solder ball technology. This is a forgiving process: the solder balls, on melting, pull the chips back into line (C4 Technology - Controlled Collapse Chip Connection). It reduces wire lengths and cuts the number of connections in half.

The 50mm module has 19 × 19 (361) pins, of which about one hundred and twenty-one are used for I/O. The pins make the connections between ceramic layers. The modules are mounted 24 per card and less than ten cards are used in a 4300 processor. The module costs about $1000.

The Purdue technology looks very good and is the way IBM intend to go, one would expect, in future systems.

It was mentioned that Engineering changes would be done using lasers to delete lines.

Dick Eutler - MVS Future Directions

To a large extent, this was irrelevant to RL. However, it is interesting in that it is clear that there is a body of opinion in IBM that says MVS is the operating system for large 3000 processors now and in the future.

Future directions were all based on the premises that:

  1. The only sensible hard interface is at the operating system level. Users getting below that are likely to be in trouble.
  2. More functions are likely to be moved into the microcode - instruction changes to the 370 order code are likely to allow larger address spaces.
  3. The possibility of getting all of MVS in microcode is absurd, despite what the trade press says.
  4. Most MVS breaks were software. They have recently done a thorough analysis of all the crashes from selected sites over three months. Interestingly, the local CE's solution to the bug was wrong thirty per cent of the time.
  5. They would like to run VM under MVS and that is how they see the future of VM.

Ed Mathews - Trends in Systems

This was easily the best presentation and gave some ideas towards what might be in the future. He made the point that IBM had seventeen thousand people in R & D with two thousand doing pure research, the total budget being $1.1 billion for R & D. He made it clear that IBM had a two-year operating plan to 1981 that he was not allowed to talk about. He could talk about ideas that might appear in the next five-year strategic plan.

His general thesis was that there were two technologies, Newtonian and Quantum. The first was large and slow while the second was small and fast. The manufacturers who survive the next twenty years are the men who keep Newtonian out of the Quantum data path. Effectively, tape and disc must be removed from the Quantum data path.

He made the standard point that halving the switching time does not necessarily halve the machine cycle time, because it does not halve the distance between parts: the distances between parts must be decreased at the same rate as the switching speeds.
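
A rough timing model makes the point; the numbers here are my own illustration, not his:

    $t_{\text{cycle}} \approx t_{\text{switch}} + d/v$

With $t_{\text{switch}} = 1\,\text{ns}$, $d = 30\,\text{cm}$ of wiring and $v \approx 20\,\text{cm/ns}$, the wire term is 1.5 ns and the cycle 2.5 ns; halving the switch time to 0.5 ns only brings the cycle down to 2 ns, a twenty per cent gain rather than fifty.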

He also made the point that Josephson Junction systems could get the switching speed down. These would perhaps be used in communications satellites to avoid the need for cooling plant, with data base retrieval done against a data base held in the satellite. 20 Mbyte memories could be constructed which generate only 10⁻¹⁶ joules of heat per switch.

There was every likelihood that 64K bit chips with one hundred per cent yield could be fabricated in bipolar technology (actually 96K bits on the chip, with the system organising itself to use a good 64K), including self-checking and replacement of failed elements.

All non-cryogenic machines would be water cooled by the 1980s. This came out time and again. IBM see this and AP/MP systems as the major way of attacking the plug-compatible companies.

He made the general point that the current packaging of systems into processor, memory and storage would go by the 1990s. Moving data to and from processors was absurd. He saw chip design eventually going to a position where memory, bubble storage, procedures and program would all reside on a single chip. The procedures would be etched in the silicon (square roots etc.) while the program would be writable. IBM have a 500 Mbit/square inch bubble system in research.

He sees a 10-15 MIP processor in a two metre cube.

Derringer - JES2/JES3

Again, of little interest to us although of high interest to the attendees in general.

Rodges - VTAM

The only interest here was the speaker who sounded like a truck driver using CB jargon. A typical sentence:

If you DP smarts want to get cost benes on Apples then you need to scrunch down by a couple of megs.

A most hilarious presentation.

One piece of interest was that he mentioned a simulation system called TPNS for simulating terminal running. He implied, I think, that it is generally available. It could be of use to see how many CMS users we can stand.
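
He gave no detail of how TPNS works. As a toy illustration of the sort of estimate such a tool supports (my own sketch; all the figures are invented, not CMS measurements), the classical interactive response-time bound R ≥ N×S − Z gives a crude user ceiling:

    # Toy capacity model, not TPNS itself (TPNS drives simulated terminal
    # traffic against the real system).  All numbers are invented.
    # Bound: response time R >= N*S - Z for N users, S seconds of CPU
    # service per interaction and Z seconds of user think time.

    def response_bound(n_users, service_s, think_s):
        return max(service_s, n_users * service_s - think_s)

    S, Z, TARGET = 0.2, 15.0, 3.0   # hypothetical CMS figures and target
    n = 1
    while response_bound(n + 1, S, Z) <= TARGET:
        n += 1
    print(n)                        # 90 users before the bound passes 3s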

Tuller - Complexity/Change Management

An interesting if rather verbose talk on various attitudes to change management.

5.1 Poughkeepsie Summary

In getting information about VM/CMS, the visit was a total disaster. They hardly seemed to know it existed. In getting information about future trends, it was of little value apart from what could be obtained from Ed Mathews. Talking to people gave some insight into where IBM were going. Some of this may be, almost certainly is, confidential and I will write up the little we managed to find out separately.

Some general points that are probably correct are:

5.2 Yorktown Heights

There are about one thousand four hundred people at Yorktown, one thousand research workers and the aim is for a terminal per person (they currently have seven hundred). The pure research budget is $120M. Research is a separate Division of IBM with its Director reporting directly to the Chairman. Yorktown is free to do research in any area and other divisions in IBM are free to use or not the results of that research.

The main areas of research include semiconductor technology, mathematical and physical sciences, as well as computing. They make extensive use of computers to keep administration down. Areas worked on include Josephson technology, lithography, fibre optics, software, etc. Half of the one thousand four hundred researchers have PhDs.

We spent most of the time talking to J Huet (general introduction), D A Thompson (Storage Technology), R P Kelisky (Head Computer Systems), H H Zappe (Josephson) and A K Chandra (Software).

J Huet

He mentioned a number of projects of some relevance to the Laboratory:

Storage Research - Thompson

He made a number of general points about disc technology needing to attack all the fundamental dimensions in the production process (head size, thickness of coating, etc.), not just density.

He made the point that the 3370 disc heads are mass produced. The heads need both thin and thick lines in the wiring. Some areas need to be about three microns square. This causes some problems in the production.

To improve track density, you need to go to fixed track recording with separate servo tracks. Servoing on the recording track itself gives a forty per cent overhead, mainly due to settling time when switching between positioning and data recording.

The technology used in the RCA video system is to stamp holes in the disc; the holes are about five microns across. It is suggested that five hundred tracks per inch can be attained using this technology.

The recording medium itself embosses quite well; holes tend to come out at about half the original punch size. The two servo detectors go out of phase if the head moves off line, and it is easy to recognise the phase change and correct for it.
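
The servo idea is easy to caricature. The sketch below is my own illustration of off-track detection as a phase shift between the two detectors, with invented signal parameters; it is not RCA's electronics:

    # Toy illustration: head drift shows up as a phase lag between the two
    # servo detectors; the sign of the lag gives the direction of the error.
    import numpy as np

    def lag_seconds(sig_a, sig_b, dt):
        """Delay of sig_b relative to sig_a, estimated by cross-correlation."""
        corr = np.correlate(sig_b, sig_a, mode="full")
        return (corr.argmax() - (len(sig_a) - 1)) * dt

    dt = 1e-6                                     # 1 MHz sampling (invented)
    t = np.arange(0.0, 1e-3, dt)
    det_a = np.sin(2 * np.pi * 1e4 * t)           # 10 kHz servo tone
    det_b = np.sin(2 * np.pi * 1e4 * (t - 5e-6))  # off line: 5 us behind

    print(lag_seconds(det_a, det_b, dt))          # ~ +5e-06 s; sign = direction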

He made the general point that optical systems have a hard limit to their storage capacity, due to the wavelength of light, and that they are approaching the diffraction limit. Magnetic media do not suffer from this cut-off: even a 100 Angstrom cell still contains about 100 atoms.

He saw the magnetic tape side being driven by the video digital recording industry (2 × 10⁷ bits/sq in, compared with 4 × 10⁶ for discs, 10⁵ for tape and 10⁶ for optical discs; but it is probable that the disc would reach 10⁶ and optical discs could not go much further). He expects to see one thousand tracks per inch, twenty thousand bpi, and errors of the order of 1 in 10⁴ uncorrected, with recording rates up to 60 Mbytes/sec, probably using three heads in parallel.
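
It is worth noting (my observation) that his two tape figures multiply out to exactly the video recording density quoted above:

    $1000\,\text{tracks/in} \times 20{,}000\,\text{bits/in} = 2 \times 10^{7}\,\text{bits/sq in}$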

His main feeling was that although optical technology may come into its own in the short term, in a longer timescale there is ultimately more potential in magnetic material.

Josephson Technology

From a starting point of three people in 1968, they now have one hundred people involved in Josephson Junction (JJ) research. The basic principle is that a tunnelling effect allows current to flow between two metals in the superconducting state if they are sufficiently close together (about 100 Angstrom). The device consists of two superconductors with a thin oxide layer in between.

Three JJs are put together to form a SQUID (Superconducting QUantum Interference Device). The thickness of the barrier oxide needs to be about 60 Angstrom. They achieve this by having a system which grows the oxide while an RF discharge removes it; fine tuning balances the two effects.

They achieve gate times of 13 picosecs. (Rise time 7ps, fan out 6ps). Power is 3.4 microwatts per gate. Using 2.5 micrometre lines, 2 picosec switching and 1 microwatt dissipation is possible.
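
As an order-of-magnitude check (my arithmetic), the energy dissipated per switching event is roughly

    $E \approx P \times t_{\text{gate}} = 3.4\,\mu\text{W} \times 13\,\text{ps} \approx 4 \times 10^{-17}\,\text{J}$

which squares with the 10⁻¹⁶ joules per switch quoted for the satellite memories in Ed Mathews' talk.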

So far they have built a chip using 5 micrometre lines and tested it over 10¹⁰ switching operations. It proved to be very stable. They are in the process of producing a 16K bit store chip dissipating 40 microwatts with a 15 nanosec cycle time.

The chip design is similar to Dutchess technology, but uses their own E Beam system, which they think is the most advanced in the world. It is not a shaped-beam system; they can get down to 0.1 micron. This is achieved by firing at the plate, which produces a small hole that expands out inside the plate; by cutting away the top level, a much higher resolution is obtained than that of the complete hole. Alternatively, a very thin sheet might be used.

Their aim is to develop a 250 MIP computer: 300K circuits (1 nanosec), a 256K byte cache (2 nanosec) and 64 Mbyte RAM (10 nanosec), all in a package 8cm × 8cm × 10cm. They hope to have a test vehicle up and running by 1983. Adding the cooling system brings the size up to a 4 ft cube.
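
The package size is forced by signal propagation (my arithmetic, not theirs): with 1 nanosec logic, a signal travels at most

    $c \times t = 30\,\text{cm/ns} \times 1\,\text{ns} = 30\,\text{cm}$

per gate delay, and rather less on real transmission lines; allowing for round trips, a package of the order of 10 cm is about the largest consistent with the quoted speeds.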

The system is very sensitive to external magnetic fields and would need magnetic shielding. Cooling it down should only take about fifteen minutes, and it can be brought back to room temperature in five minutes.

It is clear that IBM have a heavy investment in this area and have got a significant way forward.

Software

The most interesting item that came up here is that they are developing a minicomputer, the 801, with both hardware and software designed from scratch. It should be about twice the speed of the 3033.

Computing Systems

We had lunch with Dick Kelisky, who runs the computing service at Yorktown. They tend to run their systems with 8 Mbytes of memory and four or five 2305s. VM and CMS are heavily used, and they are fully convinced that this is the path that IBM will have to take in the future. Their changes to VM etc. can get into the released versions if sufficient pressure is brought to bear on the marketing and production side at Poughkeepsie; they themselves would welcome it. There was an implication that their EXEC system would be released.

RL/Yorktown Exchange

We talked a while about the possibility of exchanges between the two establishments. Yorktown would welcome this. It should be organised through Brian Kington in the UK. They have possibilities ranging from three months to one year, and even their two-week summer faculty programme.

The person who does the organisation is Dr Sadagopan who will visit the Laboratory on 15 October to discuss it further.

It was unfortunate that we got so little time to talk to anybody about VM or CMS.

6. PRIME

We spent one day at PRIME. Len Ford was there the previous day and obtained the answers to most of the outstanding technical questions.

6.1 BBN

We started with a joint meeting with BBN. Bob Harvey and Norton Greenfield from BBN attended and Andy de More, their PRIME salesman.

They had a major requirement to keep INTERLISP alive, and had a problem similar to ours in deciding what to do subsequent to the DEC10s.

They had looked at a number of systems to see which to put it on. Their view was that it would be better to run INTERLISP with a small number of users on a large mini rather than have lots of competing users on a large mainframe.

The two systems looked at in detail were PRIME and DEC. (They will let us have details of this comparison; they had looked at many more, but only the VAX and PRIME were possible.) Their conclusion was that the PRIME microcode facilities were far superior to those of the VAX; they felt the implementation would almost certainly not be possible on the VAX. In terms of microcode space, the VAX's 1K of RAM was not enough, nor was its 3K of ROM, and that was all that was available. They also needed either a large number of registers internal to the microcode or the ability to place constants in the microcode: the VAX has neither, the PRIME both.

Their time frame for the implementation is to have a version of some kind running by early next year. At that stage it will not be commercially available - for example, it will have no garbage collector - but it will be able to demonstrate programs. The full system will not be available for another six to nine months. They emphasised that they would take all the steps necessary to bring out INTERLISP as a product and support it.

They saw the PRIME system as being the standard version in the future with the DEC system being left to die.

The cost of the product would be about $20,000 per system.

The implementation would be on a PRIME 400 and a 550 but not a 750. There are problems with mounting it on a 750 and the improvement in power would be minimal: the 750 has problems if it tries to drive another 4K of WCS, as the current through the edge connectors would be too great.

The cost of the extra board for the 550 is about $12,000.

The question of whether BBN would like outside collaboration was raised. They have sufficient effort on the project (four full-time and two part-time people), and their view was that collaboration would probably be more disruptive than helpful. (They do not have an ARPA link to their PRIME system.)

However, they would be very happy for somebody to come and find out about the project. The best time would be November when the development would be a long way on. However, they had no objections to a visit in September.

It was left that I would contact the people at Sussex with possibly Rod Burstall or Aaron Sloman going as well. We should telephone Bob Harvey (I have his card) if we wish to take up that suggestion.

They have been working on the project for about a year and a half and microcoded a PDP11 initially.

They are likely to have to make minor operating systems changes to PRIMOS. They will ensure that there is an easy interface to FORTRAN.

BBN also have the requirement of mixing FORTRAN and INTERLISP programs.

There is a possibility that ARPA may fund the project. BBN did not disclose much information on this.

6.2 Future Products

Bill Poduska indicated that the company were happy with the 400 to 700 series in terms of bit efficiency of the order code etc. They saw a continuing evolution of the Operating System. Eventually the 64K segment limitation would be removed. Also they would allow direct segment access to files.

They hoped to cover whatever IBM did in the SNA area and to put more effort into graphics.

They hoped to develop the Badger machine, a low cost ($50K) 0.4 MIP machine.

The future developments were towards an ECL machine in 1981/82 called the FOX (2.5 × 750 and air cooled). The processor would have a ten-stage pipeline. A second system, APE, which would be a multiprocessor version, was scheduled for later.

The current effort on Operating System development was about twenty-five people. The total R & D was 300 people with 50 overhead, 125 software and 125 hardware.

The board needed for INTERLISP was discussed and Ian Edmonds agreed to write confirming that it would be made available to SRC in the future.

Outstanding orders were discussed with Bob Murrell. There appear to be no problems with 550 deliveries, but the upgrade of a P400 to a P750 could be a problem. It was agreed that delivery could be met this financial year if a letter of intent was sent before the end of September.

The target for 750 deliveries was twelve a week, but they are currently only managing six. They produced about 50 550s a week. They intended to start a second shift on board production soon for the 750.

A tour of the manufacturing facilities showed an expansion since last year. However, the intention is in time to move most of the facilities to a single large location and release some of the existing buildings.

The only major change in production was that they now have a machine for placing modules onto boards.

The new front for the 750 is nearly as bad as the pregnant P400 cabinet designed to catch dust.

The new 80 and 300 Mbyte drives come with a Write Protect switch.

In the R & D area we saw the ECL machine, which proved to be about the same speed as the P750. It will not be taken any further but has proved useful in deciding on the design for FOX.

The 1 Mbyte memory board proved to be something of a myth. They had been unable to get chips from TI, and just before NCC they borrowed enough chips from Motorola to make up the board, but had to return them straight after the show. At the moment the cost of chips is so high that the board would be uneconomical. The existing board will take the 64K chips and needs only a few timing changes.

They have still not got a 600 Mbyte drive from CDC. It is due for delivery in November.

They have some 6250 drives from STC but the ones that will be offered are slightly different from the ones that have been tested so far.

They have a memory extension unit under development which will allow the memory to be extended to several megabytes.

7. DEC

The visit to DEC was in three parts. In the morning we met Gordon Bell, we then visited the VAX plant at Salem and finished up with a VAX/DEC10 question and answer session at Marlborough before returning to Logan Airport using the DEC helicopter - the most memorable flight of the trip.

7.1 Gordon Bell

Gordon is currently putting together a DEC museum in the Marlborough entrance hall. They have parts of WHIRLWIND, TX-0, a very old LINC and many others. It will be an interesting exhibit when it is finished.

It was clear that DEC view their main product in the future as the VAX range. Their major problem is getting rid of their existing ranges. They are still selling PDP8s in office automation systems. Their view is that if a product can maintain its market position then they will continue to sell it.

The PDP11 series is still selling well. The PDP11/44 will be available in nine months' time. They will also announce a PDP11/23 (LSI-11-like, with 256K bytes of addressable memory). They may announce a PDP11/24.

The VAX series will go down to PDP11/34 size. This will be based on a single chip. The aim is to go actively down market until there is a VAX in an intelligent terminal. Up market, the VAX would double in processor speed and also come out in symmetric multiprocessor configurations.

At the low end, 3in Winchester discs would be available at the personal computer level.

The DEC technology is produced, or will be produced, using EBL mask making but conventional optical fabrication.

It was clear that the architecture of the DEC10/20 series would not be extended, although there was a commitment to existing users. In two to three years, there was likely to be a 2 × DEC KL processor at a lower price than current systems.

The future for AI seems to be to miss the DEC10 enhancement and go to VAX - that was Gordon Bell's view.

The knowledge of IBM was either low or they were not telling. They were obviously worried about IBM competition, but without really knowing why.

The general view was that CCDs have not made it and would not. Bubbles may make it, but at the low end.

The only real change in disc and tape he saw, apart from obvious progressions, was an 18-track non-blocked IBM tape system, needed for large non-removable Winchester discs.

DEC were looking at bus connections between processors, probably based on their UNIBUS experience. It did not sound as though a product would arrive for quite some time.

An interesting point was that a multi-drop version of the DMC11 was coming out. That would be a feasible way of coupling the SNS PDP11 systems to some central system.

7.2 SALEM

Salem is a final assembly plant. It is one of a number. Westminster is probably even larger. It is made up of several well-defined physical areas, each producing a specific product. They have ten main sections, two of which are devoted to VAX production. They also produce 11/70 and 11/34.

The plant starts from already populated boards and builds CPUs for store. These are pretty well standard, although they have versions with and without MASSBUS interfaces. The processors go into store and then come back for system integration.

The store room was large, about five levels high and filled with VAX and PDP11 products - really incredible. They hold about five weeks' stock of processors, typically 180, though at the time they had nearer 230.

The plant only works one shift plus overtime. Overnight diagnostic testing is carried out. Increasing the number of shifts would almost certainly mean that people would stand around baby minding.

They had a minimal capability for repairing boards, etc., but it is mainly an assembly plant.

They were looking at flexi time. The general feeling was that they cared about people.

Quality control seemed to be quite strict. They rechecked input equipment almost immediately and there were a number of check stages in the production. About ten per cent of the production went through a quality control section where it was checked right down to the type of nut being used in the assembly. Almost every system was rejected at this stage with good statistics kept of the reasons. This enabled changes in procedure to take place where necessary.

The whole integration/quality control system was called MAST - Modular Approach to System Test.

VAXes are produced in volume production mode for store. This takes about six weeks of manufacturing and ten weeks in total to get a machine into store. When systems are made up from store, the putting together and testing takes about six weeks. In theory, therefore, it is impossible to get a system less than about six weeks from order.

At SALEM, ECOs, wiring faults, etc. will be corrected. Wiring changes are colour coded:

WHITE: Board change to make it work
GREEN: ECO
YELLOW: ECO
BLUE: Field change

Any board with a large number of white wires could well be a Friday afternoon one.

The diagnostic testing is less automated than expected. They do not initiate and log diagnostic running centrally. Diagnostics can be run serially from a floppy disc.

In the final test, all components of a system are tested, but not as a total system. One disc would be put on each MASSBUS, but not the total set of discs in the configuration. Consequently, crosstalk is never tested.

7.3 Marlborough

We started by getting a number of replies to specific VAX questions:
