FORUM

May/June 1986

Editorial

The good news for everyone in this issue is the coming of the CRAY X-MP/48 to the Atlas Centre, one of the implementations of the recommendations of the Working Party on Future Facilities for Advanced Research Computing; see Brian Davies' article on this page and the invitation to the Open Meeting for potential users. We will have more details in the forthcoming CRAY supplement; please tell us what you would like to see included.

Bob Maybury's progress report on the CERN-RAL collaboration also includes a plea for your comments. Here is your chance to add your "two penn'orth".

This is a bumper issue with a pull-out supplement covering the ECF Recommended Graphics Devices; for those who have not met this set of initials before, Mike Jane explains all.

Besides our regular features we also have a Letter to the Editor (these are always welcome), the conclusion of ERCC's DEC-10 biography and reports from a crop of Spring meetings.

Ros Hallowell, Editor

CRAY X-MP/48 for Advanced Research Computing

In the September/October issue I summarised the recommendations of the joint Working Party set up by the Advisory Board for Research Councils, the Computer Board and the University Grants Committee on "Future Facilities for Advanced Research Computing". Briefly, the main finding of the Working Party report was that there was a very strong case for the provision of new advanced computing facilities, so that the opportunities for advancing knowledge and understanding by computational methods in almost every branch of science could be taken. The report recommended that the ABRC, the Computer Board and the UGC should secure and allocate funds centrally for a national facility for advanced research computing. This would comprise:-

During the autumn the report was discussed widely within the Research Councils, at the Computer Board and the UGC, and was generally well received. At the February meeting of ABRC it was agreed to proceed with the initial supercomputing items recommended by the Working Party, and a package deal involving ABRC, Computer Board and UGC funds was put together to enable a 2 Mword Cray 1S to be installed at the University of London Computer Centre and a Cray X-MP/48 in the Atlas Centre at RAL. The Cray 1S will be delivered to ULCC during the summer, and no doubt ULCC will be providing information on progress etc. The remainder of this article is about the Cray X-MP/48.

The X-MP/48 is a 4-processor machine with 8 Mwords of memory. Each processor is somewhat faster than a Cray 1S and the peak theoretical performance approaches 1 GFlop (one thousand million floating-point operations per second). The machine will have a 32 Mword Solid State Device and about 14 Gbytes of disk storage, and it will be front-ended by the existing RAL IBM mainframe system, and possibly also by other types of machine later.
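
For readers curious where the "approaches 1 GFlop" figure comes from, here is a rough back-of-the-envelope sketch. The 9.5 ns clock period and the figure of two floating-point results (one add and one multiply) per clock per processor are commonly quoted X-MP characteristics assumed here for illustration; they are not figures taken from this article.

    # Rough peak-performance estimate for a 4-processor X-MP.
    # Assumed figures (not from the article): 9.5 ns clock period,
    # two floating-point results (one add + one multiply) per clock per CPU.
    clock_period_s = 9.5e-9
    results_per_clock_per_cpu = 2
    cpus = 4

    peak_flops = cpus * results_per_clock_per_cpu / clock_period_s
    print(f"Peak ~ {peak_flops / 1e9:.2f} GFlop/s")  # about 0.84, i.e. approaching 1 GFlop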

The X-MP/48 will be operated by SERC on behalf of the Research Council and University communities. Authorisation to use the Cray will be granted via a Peer Review procedure which will be based to a large extent on the Peer Review mechanisms already used in the Research Councils. The precise details of this are being discussed at the moment.

The machine itself has just been ordered. We have not yet been given a definite delivery date, but it is possible that the computer could arrive around the end of this year. There will then be a period of installation and of acceptance tests and we expect to be able to provide a service to the first users within a few weeks of the delivery date.

Brian Davies, SERC Director, Computing

Open Meeting for Potential Users Tuesday 8 July

At this stage there are a number of matters still to be resolved and it is certain that potential users of the facility will have questions they wish to raise. We are therefore planning an open meeting to be held in the Great Hall at Imperial College London on July 8. This will cover the details of the equipment to be installed, the front-ending facilities and access via Janet, and the arrangements for Peer Review and for exploratory use of the machine.

The attendance is limited only by the capacity of the lecture theatre (about 600). Morning coffee and a light lunch will be provided courtesy of SERC.

If you would like to attend this meeting please contact me, Mrs Jacky Hutchinson, at the Atlas Centre, RAL.

Jacky Hutchinson, User Support and Marketing, Central Computing Division

Prime General Meeting

The third Prime General Meeting was held at Warwick University on 11 April, following a Prime Managers' Meeting on the previous afternoon. There were 36 attendees, made up of three equally represented groups: managers, users and members of the ICF team (from RAL and UMIST).

The main points of the meeting were:

Attempts had been made to persuade Salford University to make their Fortran77 compiler (FTN77) more compatible with Prime's Fortran77 compiler (F77). Although Salford were prepared to make some changes, they lacked the effort to provide compatibility for passing character arguments. Because F77-compiled code is accessible to C and PASCAL programs, it was resolved to compile Fortran77 applications libraries with F77 and to guarantee their working only with F77. It was also agreed to maintain support for FTN77 because of its faster compilation speed and additional compiler options, which made it a friendly product.

PRIMOS 19.4 had been running at UMIST for several weeks and was about to be installed at RAL. It would be distributed to the remaining sites in May.

The UMIST P9950 had been upgraded to a P9955, and RLPA and RLPB (both P750s) would be merged into a single P9955.

Maintenance contract funding problems persisted with Hatfield and Middlesex Polytechnics, and support for these Primes was suspended until the matter was resolved.

NETLINK had been modified to support TS29 and restrict network access to authorised users.

The NAG On-line Supplement would be made available to all Prime sites via the network.

The next version of the GKS library, which supports the Benson plotter, would be released at the end of May, although this date might be affected by the above decision on Fortran77.

PRIMIX (Prime's UNIX offering) was being thoroughly evaluated by RAL. A contract with Salford University to provide a JTMP for the Primes was being negotiated.

Previously notified questions to SERC were dealt with to the meeting's satisfaction.

In the afternoon, representatives from Prime attended. They stated that the education market was very important to Prime. There followed a presentation outlining the improvements Prime had made to their Field Service in the past year. It was hoped that these changes would address the difficulties that some sites had been experiencing with maintenance.

Prime's representatives were unable to answer any of the questions previously notified to them. The meeting was most dissatisfied with this position. Prime promised to take away the questions, along with further questions tabled at the meeting, and provide written answers within a month.

Dave Lomas, CSC, UMIST, Secretary of Prime User Group

Joint meeting of GEC users and managers

The second joint meeting of GEC users and managers was held at Cosener's House, Abingdon on 2-3 April. Everyone appreciated that holding the entire meeting, formal and informal, under one roof enabled the maximum exchange of information.

Routine matters of support and documentation were considered, and users generally felt their requirements were being attended to by RAL staff. However, they would like more notice of changes to the system which could affect their work. Site managers agreed to bear this in mind.

The lack of network support caused great concern.

The implementation of GKS is eagerly awaited; we hope it will not be delayed too long by the staff shortage in the graphics section.

The screen editor, WS, implemented by Cardiff under contract from RAL, was welcomed.

Users were asked to determine the demand for PROLOG, which is being implemented by Loughborough, again under contract from RAL.

Two topics occupied a great deal of users' time: how users could gain access to the policy-making process whose decisions affect them, and how they could best be made aware of all the information available on the RAL-supported OS4000 systems.

The Users' Group reports to the ULC through its chairman, but users felt that it might be more appropriate for the group to report in some way to the Engineering Board. This matter is being investigated.

It was agreed that there was a lack of "information about information" and that new users in particular could benefit from more guidance. One of the users, Andre Schappo from Loughborough, kindly offered to produce a document indicating how information could best be presented to all users.

Two tutorials, on Networking and GKS, were given in the afternoon.

The next meeting is scheduled for 23 - 24 September.

Anne MacKinnon, Group Chairman

The Edinburgh (ERCC) DEC-10 Computer

Part two of a two-part biography.

Chapter Six (1980-1983)

The next development was the decision by the SRC in February 1980 to replace the existing ERCC DEC System-10 1070 (with a KI cpu) by a DEC System-10 1091S (with a KL cpu) and to close the UMIST DEC System-10. This was in effect an upgrade of the total DEC-10 resource, to solve the overloading problems of the two existing machines and to increase the address space available to the user community. There was also a significant financial saving as a result of this upgrade. Due to the increased floor loading of the KL version of the machine (which was too great for the old location), and for reasons of staff economy, the new version was to be installed in the ERCC main machine room with no dedicated operators. Other enhancements to be carried out simultaneously included another doubling of memory up to 512K and a tripling of disc space to 750Mb. The resulting juggling with disc drives eventually included some transfers from UMIST, and for a short period in the summer of 1981 the storage space reached 1100Mb!

The installation actually took place in November 1980, the machine being out of service for only twelve days. As a result of the changes the machine was found to have 2.5 times the power of the KI version, and the throughput was quadrupled.

Network reliability improved further as the DEC System-10 became more established as an X25 based system. The UMIST machine had finally closed in May 1981 and had been replaced by a PRIME 750. Eight of the former main user groups at UMIST transferred to the ERCC machine.

General consolidation of the service followed over the next two years with performance monitoring and fine tuning further improving response time and throughput. The number of users steadily rose though it is impossible to specify the total numbers of individuals involved since usernames were allocated to projects with no detailed records for each separate worker. A maximum output in excess of 16,000 AUs was achieved in the year 1983/84 (an AU is equivalent to 5 hours of interactive terminal connect time in Prime Shift).

Despite its obvious success the future of the DEC System-10 still remained in doubt; closure seemed inevitable and, after much discussion with proposals and counter-proposals, the final decision to shut down in March 1985 was taken by the Science and Engineering Research Council (SERC), as SRC had now become, and announced in September 1983. This resulted in the freezing of any further development and marked the beginning of a steady run down in the operation of the machine and the facilities provided.

David Mercer became Systems Manager in May 1983 but transferred to other work in August of that year as a result of the planned closure of the machine.

Similarly, Charles Mackinder transferred to other duties from April 1983 but continued to supervise financial aspects at ERCC until final closure. Jeff Phillips (from user support) took on the job of Systems Manager of the machine for its remaining years of operation.

Photograph: Bernard Loach, Mike Jane, Charles Mackinder and Jeff Phillips at a Management Meeting
© UKRI Science and Technology Facilities Council

Chapter Seven (1984-1985)

A reprieve in the closing date came when the planned alternative infrastructure facilities for the AI community, now supported through the Alvey initiative (via SERC's Information Technology Directorate), failed to materialise quickly enough, and the ITD agreed to fund an additional six months' operation of the DEC-10 by ERCC, bringing the closing date to 30 September 1985.

The machine underwent one further change of location in July 1984 - the space it occupied in the main machine room at ERCC was required for new developments of ERCC's own facilities, and it was agreed that the DEC System-10 should be moved at ERCC's expense to its final Edinburgh University location: the Appleton Tower in George Square. At the same time a final upgrade increased the memory to 760K and disc capacity to 900Mb.

Needless to say the demand for resources remained high "right up to the death". However all required user files were successfully copied to tape and passed over to the users concerned before the system closed. The success of this particular operation was largely due to the efforts of Janet Dalitz.

Chapter Eight (Summary and Acknowledgements)

The DEC System-10 has had many users over the years, reaching a peak during its life as part of the ICF. This was in 1981/82 when 74 active allocation holders were listed (the number of active users at this time was between 300 and 350).

The total output in nearly nine years' use as an ICF machine in terms of AUs was slightly in excess of 100,000 (this equates to twenty-five interactive terminals simultaneously connected for the whole of every prime shift throughout the nine years). A maximum of 16,030 AUs was reached in the year 1983/84. It is noteworthy that in terms of operating cost per AU produced the Edinburgh DEC System-10 was always the most cost-effective machine in the ICF system.
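
As a rough sanity check of the twenty-five-terminal equivalence, the sketch below reproduces the arithmetic. The nine-hour prime shift and the roughly 250 working days per year are assumptions chosen for illustration, not figures given in the article.

    # Rough check of the "25 terminals" equivalence for 100,000 AUs over 9 years.
    # Assumed (not from the article): a 9-hour prime shift, ~250 working days/year.
    au_total = 100_000
    hours_per_au = 5                       # from the article: 1 AU = 5 hours connect time
    prime_shift_hours_per_year = 9 * 250   # assumption
    years = 9

    terminal_hours = au_total * hours_per_au
    terminals = terminal_hours / (prime_shift_hours_per_year * years)
    print(f"~{terminals:.0f} terminals connected throughout every prime shift")  # ~25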

The success story of the ICF DEC System-10 at Edinburgh is the direct result of the efforts of a large number of people at the Edinburgh Regional Computer Centre who provided a high quality service backed up by efficient and professional management and support. The ICF Management at RAL wishes to acknowledge the contribution of everyone at ERCC, past and present, associated with the DEC System-10.

Bernard Loach, Informatics Division