This issue of FORUM contains the report of the Central Computer Representatives Meeting. Although this is dominated by IBM specific material, there is one item of very general interest on Network Mail. A serious attempt has been made to restore the balance by keeping other IBM specific items this month to a minimum.
The problems associated with the move to Rev 19.1 of the Prime operating system have been solved. Additional memory is now on order and will be delivered by 1st August. A draft schedule for mounting Rev 19.1 on each machine is being discussed with the individual sites. The final schedule will be published as soon as it is agreed.
The closure of the Dec 10 at Edinburgh and the impact on the main SERC APL service are brought to users' attention in this issue.
The service overall, despite the abnormally hot weather, has been good apart from some recent problems with VM on the 3081D which resulted in a poor CMS service. Measures to overcome these difficulties have been implemented, but it is too early to say how effective they are.
The central IBM system is currently suffering from I/O contention problems. Users should be aware that we are very conscious of this, and that every effort is being made to come up with a sensible solution. Unfortunately, this cannot be achieved overnight and much careful study is required.
IBM batch usage has increased significantly in the last two weeks, largely due to a number of large High Energy Physics groups starting their production work. The Atlas 10 has continued to perform well and is now producing some sixty percent of the batch cpu hours delivered to users. Discussions are in progress with ICL which may bring forward the handover date by three weeks to 8th August.
The Network Executive vacancies have been widely advertised in the press, and it is hoped this will lead to a successful recruitment exercise. Two Joint Network Team posts were included in this advert.
Regrettably, we have been forced to reduce the Program Advisory office service from 8th August. Full details are included on page 2.
The major event was the approval for a paper to go to Council recommending the purchase of a 110 Gigabyte M860 mass storage system from MASSTOR International. Assuming it is approved by Council, the system would be delivered in the New Year and we would hope to start a service using it at the same time as the full production MVS service.
A large part of the meeting was spent considering ways in which recurrent costs could be saved in the current financial year and later years. The Central Computing budget has been seriously impacted by the cuts to the various Boards. All Boards other than Engineering have made drastic cuts to their contribution to Central Computing. As a result, a much slimmed down service is envisaged. In the current year, we have moved to DPCE for the maintenance of the Memorex discs and have decided to adopt a new approach to maintaining both terminals and machines. In the case of terminals, we will stop the yearly maintenance and instead keep a set of terminals for replacement. When a terminal breaks, it will be returned to RAL and a replacement sent to the user. Fast express parcel carriage services should, we hope, make this possible with no worse a service to the user. The terminals that are broken will then be repaired as a set. In the case of the computers, we intend taking a number of the GEC computers off maintenance and paying for repair as and when they break.
In order to give the operators sufficient time for training in MVS and to reduce recurrent costs in the long term, we shall be moving from 5 to 3 operators on the basic 5-shift system. This will have an impact on the number of tapes that can be loaded per week. Although there is a long term plan to move active data off magnetic tape, this cannot begin seriously until the mass storage system and MVS are fully in operation. In the meantime, the amount of resource delivered by the mainframe systems will be less. The exact effect on the user population will not be known until the new system has been in operation for a few months. We apologise in advance to users, but this has been forced upon us by a need to reduce staff costs in the current and future years.
A joint SERC/Computer Board Working Party has been set up to review the way computing is funded and managed in the Research Council and University sector, and to report back by Christmas.
The need to reduce manpower led the Computing Division to explore the possibility of freezing support for the PRIME systems and stopping networking support for VAX/VMS systems. There was strong support for continuing both services at the meeting. As a result it was decided that 3 MY of support currently used for the central batch service would be moved to supporting the interactive systems. This has already been done and will allow a small amount of network support for VAX systems to continue and enable us to move to Rev 19 of the PRIME operating system.
The Program Advisory Office service has to be modified in the light of reducing manpower and the consequent unreasonable load placed on the people with the necessary expertise to provide the service.
From Monday 8th August 1983, the PAO will open from 1400 to 1630 Monday to Thursday and 1400 to 1530 on Fridays. During these periods users can visit the office if they wish.
Queries must be sent via GRIPE, ASKUS, TELLUS or NOTEs to US except in cases of extreme urgency when the telephone may be used. Routine and non-urgent enquiries will not be accepted by telephone. Emergency service outside these hours can be obtained by contacting the Shift Leader.
The service may be further reviewed at a later date in the light of experience gained from this change.
The following is a list of future courses/meetings.
For further information and enrolment, please contact either the Program Advisory Office on Abingdon (0235) 44 6111 or Garry Williams on Abingdon (0235) 21900 ext 6104.
GEC have recently announced a new computer, designated the Series 63. The architecture of this machine differs from the established 4000 series, in that it is based on 16 general purpose registers (32 bits wide) and a 32 bit bus. The addressing limitations of the old 4000s have been overcome by the use of a paged Virtual Address Space using full 32 bit addressing. This gives a virtual address space of 4096 Mbytes for each process of which there may be 4096 in a system.
Initially the system may have a maximum of 12 Mbytes of real store, but this will be increased to 48 Mbytes in future. There is an architectural limit of 256 Mbytes on the store size. The store cycle time is 250 ns and a pipelining mechanism enables the processor to achieve an instruction rate of 3 MIPS. I/O is performed by 'secondary processors', which operate independently of the main processor, and use direct memory access across the bus.
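The address space figures quoted above follow directly from the 32-bit addressing. As a minimal illustration, the sketch below checks the arithmetic and shows how a paged virtual address splits into a page number and an offset. The 4 Kbyte page size is an assumption chosen for illustration; the article does not state the Series 63 page size.

```python
# Arithmetic behind the Series 63 address space figures quoted above.
# NOTE: the 4 Kbyte page size is an illustrative assumption only.

ADDRESS_BITS = 32
PAGE_SIZE = 4 * 1024  # assumed page size in bytes (not stated in the article)

virtual_space_bytes = 2 ** ADDRESS_BITS
virtual_space_mbytes = virtual_space_bytes // (1024 * 1024)
print(virtual_space_mbytes)  # 4096 Mbytes per process, as stated above

def split_address(va):
    """Split a 32-bit virtual address into (page number, byte offset)."""
    return va // PAGE_SIZE, va % PAGE_SIZE
```

With 4096 processes each given a 4096 Mbyte space, the hardware clearly cannot back every page with real store (at most 256 Mbytes); the paging mechanism maps only the pages actually in use.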
Other important features of the system include comprehensive security mechanisms based on privilege levels, and extensive self test hardware built into each component of the system. Floating point arithmetic is available for numbers up to 128 bits, and there is an optional floating point accelerator for improved floating point performance.
A choice of two operating systems will be offered with the Series 63, UX63 and OS6000. The former is a virtual memory UNIX system derived from UNIX System III from AT & T, whilst the latter is a GEC operating system which provides the user with an environment similar to OS4000.
The standard set of languages is supported, including FORTRAN 77, C, PASCAL, etc. Extensive communications facilities are provided via a front-end processor using standard protocols such as X25 and Ethernet.
The PRIME 2250 is a small multi-user minicomputer compatible with the PRIME 50-series machine range. This range includes the 550 and 750 machines widely used by SERC. The 2250 is designed for an office environment and is small (1/3 cubic metre) and acceptably quiet. It requires a single 13 amp 250 volt supply. The machine can be configured with up to 4 Mb of memory and two Winchester disk drives of 68 Mb or 158 Mb each in the basic chassis. It also includes as standard a 1/4 inch cartridge tape drive providing up to 15 Mb per tape of off-line storage for backup, etc, plus a synchronous line for networking and 8 asynchronous terminal lines.
One attraction of the machine for existing PRIME users is its total software compatibility with the other PRIME machines currently supported by SERC. It runs the same operating system, compilers, utilities and networking software, and all applications programs will run on it unchanged simply by transferring the SEG files. This feature is of particular interest in those application areas which involve a considerable amount of interactive design work using software on the PRIMEs at RAL accessed over the network or on leased lines. A local 2250 could provide a better interactive environment for this software, particularly if graphics is involved, and could be fully networked into SERCNET to provide access to other machines, given that manpower was available to support systems maintenance and networking.
Computing Applications Group in Technology Division at RAL has recently evaluated a 2250 on loan from PRIME as a design workstation for the VLSI community. This application requires the use of high resolution colour raster graphics terminals such as the SIGMA 5688 for interactive design. The 2250 allows these terminals to be connected at 19.2 Kbaud. Benchmarks using the GAELIC interactive IC layout editor indicated that a 1 Mb 2250 would support 2 to 3 simultaneous users on such graphics terminals with acceptable response, the main limitation being paging. With 2 Mb of memory, 3 to 4 graphics terminals could be supported, the main limitation then being time-sharing of the CPU. The machine would of course support more than 4 users doing less demanding work. The CPU power of the PRIME 2250 was measured as 0.45 of a PRIME 750 for mainly integer computation and 0.34 of a 750 for floating point work.
The current financial situation has caused Computing Division to re-examine operational methods in some areas with a view to making savings. Two areas of current activity have been studied and alternative cost-cutting strategies have been devised. These activities are the maintenance arrangements for terminals and for some Multi-User Mini computers.
Hitherto the practice has been to set up fully comprehensive maintenance agreements which, for a fixed sum per annum per unit, provide Routine Maintenance, repair service and replacement of faulty parts. This is a very expensive practice, as the supplier has to guarantee availability of spare parts and trained staff, and to cover himself against too many faults per unit.
Examination of records shows that the majority of terminals have very good fault records. This is equally true of Multi-User Mini computers. This allows us to change the policy on maintenance of these types of equipment as described below.
A scheme is to be set up based upon the use of a pool of 'spare' terminals, express carriers and bulk maintenance arrangements. Terminals will be repaired at a manufacturer's or maintenance agency's base workshops, or possibly at RAL by a visiting engineer, and the repaired terminal returned to the pool. In general every effort will be made to replace a terminal with an exact duplicate, although sometimes it might be necessary to use a functional equivalent. A table of near functional equivalents follows. Some terminals which are very large or weighty or particularly sensitive to disturbance cannot sensibly be handled with this kind of arrangement. Terminals falling within this category will be subject to contracts based upon time and materials including a fault-call response value. Where it is necessary, some Routine Preventive Maintenance will also be contracted. A working target will be for user service to be returned within 48 hours, and it is possible that this may be easily beaten in a great many cases. This compares favourably with the measured performance of 4 to 5 days on the Cable and Wireless contract. The annual saving from this exercise is expected to be in the region of £100,000.
To assess the impact on service there is to be an initial investigation with ten GEC 4000 systems. The present maintenance contract with GEC costs £400,000 per annum plus VAT.
The alternative arrangements for these machines will involve contracts to cover Routine Preventive Maintenance and a guaranteed fault-call response, with costs based on time and materials. This scheme is estimated to save in the region of £100,000 per annum without impacting the quality of maintenance support. The machines involved will be a selection of Multi-User Mini workstations. The initial list of ten is as follows:
This covered Divisional reorganisation, described in FORUM No 35, and the CCC meeting described elsewhere in this issue.
The Atlas 10 is to be handed over in September and will be running production work full time from then. The details of the configuration of the various MVT systems have yet to be decided. An internal MVS service will also begin then, either on the 3081 or Atlas 10.
It has been agreed that from September the 3032 will be taken out of the scientific service and dedicated to SERC administrative computing. Some peripheral equipment (disc, tape, communications) will also be taken, and there will be some additional terminal support overhead put on the 3081, but it is expected that the scientific service will be recompensed by the purchase of new equipment - a solid state paging device is one possibility.
If the MASSTOR M860 is purchased, as recommended by CCC, then apart from giving large amounts of effectively on-line storage to MVS users, it is hoped to use it to offer an archiving service to SERC network users.
One final development is worth mentioning - an inexpensive upgrade to the RAL standard Cifer terminal will be available very shortly to enable it to be used, optionally, as an IBM screen when accessing CMS over the network or through PACX.
Extra requirements are imposed on computer mail systems by the ability to send and receive mail from other computers on a network. A means for mail services on different computers to communicate, and compatibility between them, must exist. The interim JNT mail protocol provides communication using NIFTP, and through the definition of a standard message header containing a number of fields (e.g. TO:,FROM:,SUBJECT:) that different mail systems on different computers both generate and interpret. Network mail also requires more sophisticated mail systems, for example facilities such as distribution list handling, real name matching and hardcopy memo support. Once sending computer mail to people not on your local computer becomes feasible, there exists the problem of how to address them.
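The standard message header described above can be illustrated with a short sketch that generates and parses headers carrying fields such as TO:, FROM: and SUBJECT:. The exact field set and syntax of the interim JNT mail protocol are not reproduced here, and the addresses shown are invented; this is a simplified illustration of the idea, not the real format.

```python
# Illustrative sketch of a standard message header of the kind described
# above (fields such as TO:, FROM:, SUBJECT:). The field syntax and the
# addresses are assumptions for illustration, not the actual JNT format.

def make_header(fields):
    """Render field name -> value pairs as header lines."""
    return "".join(f"{name}: {value}\n" for name, value in fields.items())

def parse_header(text):
    """Parse header lines back into a dict; stops at the first blank line."""
    fields = {}
    for line in text.splitlines():
        if not line.strip():
            break
        name, _, value = line.partition(":")
        fields[name.strip()] = value.strip()
    return fields

msg = make_header({"TO": "A.N.Other@RL",
                   "FROM": "P.J.Newton@RL",
                   "SUBJECT": "Network mail"})
```

The point of the standard header is exactly this separation: any mail system that can generate and interpret the agreed fields can interwork with any other, regardless of the computer it runs on.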
This problem has been overcome within Computing Division by setting up a directory containing the address of every member of the Division. An M printer is used to generate hardcopy memos for those who do not use a computer regularly. These memos are entered directly into the manual post system. Names are stored in the directory as initials and surname only, e.g. P.J.Newton, and the computer on which the directory is kept is known to the other Computing Division computers as RL or RAL. In fact the directory is currently located on RLGB.
JNT mail is supported on GEC, Prime, VAX/VMS, CMS and (shortly) PDP11/UNIX. Users of these machines can therefore send computer mail to a member of Computing Division by sending it to site RL or RAL, addressed to the recipient's real name in the format A.N.Other.
SAS (A Statistical Analysis System) has been mounted on the 3081 under CMS. This system will be used to provide accounting and performance analysis for the MVS system. Users are being allowed access to the SAS system as an unsupported feature of the central computing system. This talk gave a flavour of the ways SAS can be used to perform some simple analysis of data and a demonstration of the potential results available.
The RMAIL command has been replaced by the NOTE and RECEIVE commands from release 2 of CMS. This talk outlined how these commands are used. Plans to modify the manner in which the commands are used, to make them more 'user friendly', were also outlined.
The principal SERC APL service runs on the Edinburgh Dec10, which is scheduled to close down in October 1984. The users of this service have indicated that they will not need an alternative service to be provided when the Dec10 is removed.
This note is designed to warn existing and prospective APL users that unless they contact me by the end of September, stating their requirements, no alternative service will be provided.
In the mid 1970s it was decided to develop an X25 communications network for SERC. Andrew Dunn produced the exchange (or switch) code which ran on a GEC 4000 computer. This first operated in 1977 and the first hosts to be connected were GEC 2050 workstations, the IBM 360/195 and the GEC multi-user minis. Since then the network has grown to 11 switches and around 160 connected computers or DTEs (Data Terminal Equipments). The network was never expected to grow beyond a single switch and a handful of DTEs, let alone to its current and still expanding size. The switch code was as simple as possible so that it would be reliable and easy to produce. Over the years it has been enhanced in many ways but it is now apparent that it needs a radical rewrite, in particular to improve management facilities which are very primitive. Currently these are provided by one of the DTEs continually making calls to all the others. From the results of these 'polls' the state of the DTEs can be found and the state of the network inferred.
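The polling scheme described above can be sketched in a few lines: one DTE attempts a call to every other DTE and records which respond, and the state of the network is inferred from the results. The DTE names (other than RLGB, mentioned elsewhere in this issue) and the probe function are invented for illustration; the real switch and management code are of course far more involved.

```python
# Sketch of the polling scheme described above: one DTE "calls" every
# other DTE and the network state is inferred from which calls succeed.
# The probe function and most DTE names here are illustrative inventions.

def poll_network(dtes, try_call):
    """try_call(dte) -> True if a call to that DTE succeeds.
    Returns a dict mapping each DTE name to an inferred state."""
    return {dte: ("up" if try_call(dte) else "down") for dte in dtes}

# Example with a stubbed-out call function standing in for real X25 calls:
reachable = {"RLGB", "RLPA"}
status = poll_network(["RLGB", "RLPA", "EDXA"],
                      lambda dte: dte in reachable)
```

A scheme like this can only infer state indirectly - a failed call might mean the DTE is down or that a switch on the path to it has failed - which is one reason proper management facilities built into the switch code itself are so desirable.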
Rather than rewrite the code it was decided to adopt the switch code being developed by GEC. This has the required management features and was designed for a large multi-switch network. GEC will support the software and will enhance it to follow any developments in standards which arise.
In mid 1982 it was decided to use the GEC code, and Jonathan Mills started working towards its adoption. It was late 1982 when the first suitable version of the GEC code was received. Before it could be used it had to be enhanced so that it could take the place of the SERC switch code without any other changes being made to the network. This involved the following enhancements:-
There were a number of more minor changes. These changes were completed by the end of 1982 and to schedule. Since then testing has taken place, which has taken a lot longer than expected. It is vital that, when the GEC software is put into service, it performs at least no worse than the current SERC code. The problems have been in a number of areas:-
At the beginning of June the large switch at SERC used GEC code for most of a day and proved that the code is now satisfactory for use on the network, or at least that there is enough confidence to try an extended test on the live network. Unfortunately this is not quite possible. The GEC code takes much more memory as it contains management code and is more sophisticated. This lack of store shows itself in poor performance. There is now a half megabyte GEC 4160 on order to replace the 256K switch and the 256K of store released will be distributed to the other switches so that they are large enough to support the GEC code.
Although it had been hoped to switch to GEC code in March, it is far more important to ensure that the switch is not going to be disruptive than that it is done quickly. None the less the switch is urgently needed to allow the far more advanced GEC management code to be used. The manpower to work up the GEC code has been quite modest at about 5 man months, although the elapsed time has been longer.
It is a pleasure to acknowledge GEC's help and encouragement in this development.
Much has happened since the last Forum article. PERQ UNIX (PNX) became generally available in March of this year; this release of the operating system offered a Window Manager and the C compiler. By the time this article appears, a new release of PNX, with a Fortran 77 compiler and an improved Window Manager, should have been distributed. (The system has passed the ICL quality assurance tests; software and documentation are being duplicated at the time of writing (early July).) A pre-release of the Pascal compiler has been made available (for use at RAL only, unfortunately). Limited testing has disclosed no problems with this compiler as yet.
GKS development on PERQ has proceeded rapidly since the delivery of the Fortran 77 compiler; at present developments are at (approximately) level 0a (no input, simple output) with completion of development to level 1b and pre-release targeted for the end of August.
On the communications front, the asynchronous ftp system, which allows robust file transfer over asynchronous lines to ICF Primes and GECs, has completed its first development phase and user documentation is being prepared. The X25 developments at York for PERQ are now nearing completion, and the end of July should see a prototype working. Autumn should see the completion of the Basic Block Protocol on Cambridge Ring for PERQ. (It is not now clear that the higher level transport Service Byte Stream Protocol will be implemented - any user likely to be inconvenienced by this should contact the author of this article.)
Any problems with PNX or any other Common Base component should be communicated to the Common Base Support Office on extension 6488 at RAL, or by direct dialling to 0235 44 6488. The office is staffed from 10-12 and 2-4 weekdays; the hours may be changed as demand increases. A message recording system is in use outside these times.
The plans to close the Dec10 are well in hand and we have now passed the "point of no return" (ie the plan cannot be changed to prolong the service beyond October 1984). The provision of facilities for the Dec10 users is in hand. The Artificial Intelligence community will benefit from the AI/IKBS initiatives of SERC and the Government supported Alvey programme. The remaining users will be provided with resources on existing facilities, and discussions are in hand with these users to identify their specific requirements.
Any Dec10 users who are not satisfied with the way their future needs are being dealt with, should contact the User Support Office at ERCC, or speak directly to me at RAL.
Readers of previous editions of FORUM will know that Graphics Section embarked on an implementation of GKS - the future ISO standard graphics system - in mid 1982. Since then GKS has itself become much better known and is rapidly being accepted in the USA as a standard. Work is now proceeding on standards for graphics metafiles, virtual graphics terminals and 3D: all these are using GKS as a backbone.
Our GKS implementation project has achieved many milestones - notably the drawing of pictures! The design work that we started in June 1982 continued into October 1982. This long design phase was partly intentional and partly caused by some significant additions to GKS at its last technical review, at Eindhoven in June 1982. However, the same month we finished the design, GKS drew its first lines.
Since then we have completed work on all output functions - remember GKS has pixel arrays and pattern-filled areas as well as the more usual lines and text. Design work for input and segments has been our task more recently and implementation of these functions is now in progress. We currently expect that coding of the system will be complete in mid-October, with the documentation being finished in mid-November. The development has been done in parallel on PERQ (under both POS and PNX) and VAX (under VMS). We have working (but incomplete) systems on both PERQ/PNX and VAX/VMS. Some time has had to be spent making sure that installation on other systems will be as simple and error-free as possible.
We have so far worked on device handlers for the PERQ screen, Tektronix and Sigma terminals, and a metafile. Many more device handlers are needed and work on these will be started in late August. The handlers are being written in a way which maximizes the amount of code that can be transported to other operating systems; in addition, the handlers are written so as to share a pool of utility routines, thus reducing the lead time required for completing a new device handler (and storage requirements when running GKS).
Further information on the progress of the project can be obtained from RAL Graphics Section, RAL extension 6565.