FORUM
September/October 1985

Future Facilities for Advanced Research Computing

At the end of last year the Advisory Board for the Research Councils (ABRC), the Computer Board and the University Grants Committee (UGC) set up a joint Working Party "to consider and report on the likely needs for advanced research computing and on the various options open to the University and Research Council community for acquiring, operating and providing access to the necessary services". The Working Party was chaired by Professor A J Forty of the University of Warwick. Its report, published on 15 August, has been widely covered in the press.

The Working Party finds a strong case for new provision and believes there are opportunities for advancing knowledge and understanding by computational methods in almost every branch of science. It recommends that the ABRC, the Computer Board and the UGC should secure and allocate funds centrally for a national facility for advanced research computing. This should comprise:

The central facility should be developed in stages. The first stage would be the installation in 1987 of a Cray X-MP multiprocessor at RAL. A second stage involving the installation of a second supercomputer in 1990 should be considered in 1988.

The total cost of the programme for the period 1986 to 1991, including a second supercomputer, is £69M.

No decisions have yet been taken on whether the recommendations are to be implemented. The ABRC has invited the Research Councils to comment by the end of the year, and the Report is also under discussion by the Computer Board and the UGC.

We will keep you informed of any developments.

Brian Davies, Division Head, Central Computing Division and SERC Director of Computing

The Masstor M860

Now that the production MVS service has settled down, we are in a position to introduce the data management software ASM2, which will happen around the end of September. Its first function will be to improve the way data is backed up, but this should not have any impact on users. When that is working well, full data management will start. This article gives an overall description of the way the system will work. It updates the description in the MVS Conversion Guide; more details will be given later in User Support and Marketing Group documentation.

The system will use the M860 to increase the amount of disk space logically available. Users will not be able either to store data in it directly or to access data stored there. The system will maintain three levels of data storage: the first will be real disk, the second on-line M860 cartridges, and the third off-line M860 cartridges.

Data will remain on disk while there is a high probability that it will be used. When a dataset has been unused for some time it will be moved to the M860 and the disk copy will be scratched. For safety, two copies will be made: one kept in the M860 and one held off-line. With the present version of the software the dataset will remain in the MVS catalogue. When a job that requires a dataset held in the M860 executes, the dataset will automatically be moved back onto the disk where it was originally stored. This movement takes place while the program is in execution, and if there is not enough space on the disk the program will abend; we believe there is sufficient free space on the disks to ensure this does not happen. Since the movement of data is completely automatic, you may not even be aware that it has happened. Users who want to know whether a dataset is in the M860 can use the ASM2 command $AI.
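
For example, to check the status of a dataset from TSO (the dataset name is hypothetical, and the operand syntax, following the usual TSO convention of quoting a fully qualified name, is an assumption):

   $AI 'AB12.EXPT.RESULTS'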

The length of time that unused data will remain on disk will have to be determined by experience. Initially it will be large, but it will quickly be reduced to 20 days, and then we will see how the system performs. The final figure will be chosen to give the best overall performance and will probably be somewhere between 7 and 20 days.

What happens to data in the M860 depends on its type. Data created with UNIT=TRANS will remain there for 30 days, and will be scratched without warning at some convenient time after that. Data created with UNIT=SECURE or UNIT=STORAGE which has been in the M860 for a long time, and is therefore unlikely to be accessed, will be moved to off-line storage. This data will be regarded as archived and will not be restored automatically. Before it can be used, users will have to request that it be restored to disk by issuing the ASM2 command $RA. This will put a request on a queue, and a system program will be run periodically to execute the queued requests. Any job that tries to access a dataset before it has been restored will be accepted by the system, but will fail with a 213 abend when it executes.
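
As a sketch of how these unit names appear in a job, the JCL below allocates a new dataset in one of the managed classes. Only the UNIT=SECURE parameter comes from the scheme described above; the job name, accounting fields, dataset name, space and DCB figures are all hypothetical:

   //MYJOB    JOB (1234,AB),'J SMITH'
   //*  ALLOCATE A NEW MANAGED DATASET, USING THE STANDARD
   //*  DO-NOTHING PROGRAM IEFBR14 TO DRIVE THE ALLOCATION
   //ALLOC    EXEC PGM=IEFBR14
   //NEWDS    DD  DSN=AB12.EXPT.RESULTS,DISP=(NEW,CATLG),
   //             UNIT=SECURE,SPACE=(TRK,(30,10)),
   //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=6160)

To bring an archived dataset back to disk before running a job against it, the restore request would be queued with something like (operand syntax again an assumption):

   $RA 'AB12.EXPT.RESULTS'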

The period for which data will remain in the M860 will probably be between 3 and 6 months, depending on how quickly the M860 fills up and how often data of this age is accessed.

When a dataset is restored to disk from either the M860 or the archive store, the copy of the dataset on cartridge will be scratched at some convenient time. The system does not guarantee that the copy will continue to exist, so users should not rely on it as a backup for the restored disk dataset.

Data will be kept in the off-line archive for one year only; after this it will be scratched. Users who wish to retain an unused dataset beyond this time will have to copy it to a private tape.
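
For a sequential dataset such a copy could be made with the standard utility IEBGENER. A minimal job might look like the sketch below; the dataset name, tape unit name and volume serial are hypothetical:

   //KEEPJOB  JOB (1234,AB),'J SMITH'
   //*  COPY AN OLD DATASET TO A PRIVATE TAPE BEFORE IT IS
   //*  SCRATCHED FROM THE ARCHIVE
   //COPY     EXEC PGM=IEBGENER
   //SYSPRINT DD  SYSOUT=A
   //SYSIN    DD  DUMMY
   //SYSUT1   DD  DSN=AB12.OLD.DATA,DISP=OLD
   //SYSUT2   DD  DSN=AB12.OLD.DATA,DISP=(NEW,KEEP),
   //             UNIT=TAPE,VOL=SER=AB1234,LABEL=(1,SL),
   //             DCB=*.SYSUT1

A partitioned dataset would need IEBCOPY rather than IEBGENER.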

Data management will apply only to MVS disks. When MVT closes, the data on MVT disks will be consolidated onto disks which follow MVS naming conventions and will then be managed by ASM2. This transfer will have to be done gradually to avoid excessive archiving activity within a short period.

The data management system can only be guaranteed to work successfully if datasets are catalogued; therefore all datasets which are uncatalogued or incorrectly catalogued will be scratched. Datasets which remain empty for 4 days after creation will also be scratched.

The M860 will provide a significant increase in the data storage available to MVS. This will allow some data which is currently held on tape to be moved to disk, and so reduce the turnround of jobs which use it. However, the capacity of the M860 is small compared to the total amount of data we process, and some care is necessary to ensure it does not get overloaded. Large datasets will present a problem for two reasons: they may cause congestion because of the time needed to copy them to and from the M860, and it may be difficult to find sufficient free space on a disk when restoring them. We therefore recommend that datasets larger than 50 Mbytes be kept on tape. Large collections of data should also be kept on tape, even where the individual datasets are smaller than 50 Mbytes.

Alan Mayhook, Systems Group, Central Computing Division

Joint GEC Users' and Managers' Meeting 5-6 September

This was the first of the new-format Joint Users' and Managers' Meetings and was held at RAL. Eleven users and twelve site managers attended.

The first afternoon was divided into a private Users' meeting and a Managers' and RAL meeting, the latter replacing the old GEC Managers' Meeting.

The main joint meeting, replacing the old GEC User Group Meeting, took place on the second day. These were the main points of interest:

The afternoon was taken up by two tutorials:

Brian Alston, GEC User Support, Informatics Division