Issue 2: January 1987

Initial Information about the Cray X-MP/48 service

Access to the Cray X-MP/48

We describe the methods of access planned for the Atlas Centre Cray, the ways of routing output back to your preferred destination, and the terminal access that will be available. At the time of writing none of these has yet been tested, but within each section they are listed in the order in which we propose to implement them.

Job Submission

Four methods of job submission are planned.

Output Retrieval

There are three distinct ways of routing output from Cray jobs back to your preferred printer or computer.

Terminal Access

Initially there will be no on-line terminal access to COS on the Cray. Users logged in to VM/CMS at RAL will be able to query the status of jobs using the VM station commands. They will also be able to move datasets from CMS to the Cray and to cancel Cray jobs.

Accounting, Rationing and Control

The rations and accounts for the Cray will be based on the same system which currently manages CMS rations and accounts. (MVS will be incorporated at a future time.)

Usernames are common between VM/CMS, MVS and the Cray, though a user will not necessarily be registered in all three systems. The accounting tree structure of subprojects, accounts, categories and major groupings, such as Boards, applies to both CMS and the Cray. The set of valid username and subproject pairings is also common between the systems.

The Cray ration and usage counters are separate from the CMS ones, so each node in the tree structure can be given rations for the two systems independently. The Cray counters are called "COS-AU", just as the CMS counters are called "CMS-AU". In particular, a subproject can have Cray rations without any CMS rations, or vice versa.

Each node in the tree has three pairs of usage and ration counters, for "This Week", "This Year" and "In Total". When a Cray job ends, the units it has used are added to the usage counters at the subproject node and all higher nodes. When a job is submitted, usage counters are compared with rations at the subproject node and each higher node, and if any are exceeded the job is returned unrun.
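The charging and checking rules above can be sketched as follows. This is an illustrative model only, not the actual CMS/COS accounting code; the names (Node, charge, may_submit) and the use of None for "Unlimited" are assumptions.

```python
class Node:
    """One node in the accounting tree (subproject, account, Board, ...)."""
    def __init__(self, name, parent=None, rations=None):
        self.name = name
        self.parent = parent
        # usage counters for "This Week", "This Year" and "In Total"
        self.usage = {"week": 0.0, "year": 0.0, "total": 0.0}
        # ration (limit) counters; None models "Unlimited"
        self.rations = rations or {"week": None, "year": None, "total": None}

def charge(node, units):
    """When a Cray job ends, add its units at the subproject node
    and at all higher nodes."""
    while node is not None:
        for period in node.usage:
            node.usage[period] += units
        node = node.parent

def may_submit(node):
    """At submission, compare usage with rations at the subproject node
    and each higher node; if any is exceeded the job is returned unrun."""
    while node is not None:
        for period, limit in node.rations.items():
            if limit is not None and node.usage[period] >= limit:
                return False
        node = node.parent
    return True

# Figures borrowed from the QUSAGE example reply for subproject XYZ
board = Node("Board")
xyz = Node("XYZ", parent=board,
           rations={"week": 225.0, "year": 5850.0, "total": None})
charge(xyz, 8.501)
print(may_submit(xyz))  # True: 8.501 units is within the weekly ration of 225
```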

All rations and limits (CMS and Cray) and account authorisation information can be queried (and altered by authorised representatives) from both CMS sessions and Cray jobs.

For example, from CMS a user might query Cray allocations for subproject XYZ with the command:

ACCT  QUSAGE XYZ  COS-AU

and might get the reply:

XYZ    This Week   This Year    Total
Limit   225.000     5850.000  Unlimited
Used      8.501      369.887    369.887
In Queue  0.000

To list Cray jobs run recently for user ABC on sub-project XYZ the CMS command would be:

ACCT CRAY JOBS ABC XYZ

A Cray job querying Cray allocations for subproject XYZ and also listing the users allowed to use the subproject would be:

JOB,JN=ABC1,US=ABC.
ACCOUNT,AC=XYZ,UPW=password.
ACCT.
/EOF
QUSAGE XYZ COS-AU
QCOMB ACCT XYZ
/EOJ

Cray Dataset Management

The Cray has 14 Gbytes of online disk. The Cray Operating System, COS, handles temporary datasets and permanent datasets. Temporary datasets are private to a job and last only until the job ends. Permanent datasets (PDSs) are created explicitly. They are identified uniquely by names with four components: the "owner" (usually the username of the job which created the dataset), the "pdn" (a permanent dataset name of up to 15 characters), the "id" (a field of up to 8 characters) and the "edition" (an integer up to 4095).
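The four-component naming scheme can be modelled as a tuple that serves as a unique catalogue key. The class and field names below are illustrative, not part of COS; only the field widths come from the text above.

```python
from typing import NamedTuple

class PdsName(NamedTuple):
    """Unique identity of a COS permanent dataset."""
    owner: str    # usually the username of the creating job
    pdn: str      # permanent dataset name, up to 15 characters
    ident: str    # the "id" field, up to 8 characters
    edition: int  # integer edition number, at most 4095

def valid(name: PdsName) -> bool:
    """Check the field limits given in the text."""
    return (len(name.pdn) <= 15
            and len(name.ident) <= 8
            and 1 <= name.edition <= 4095)

# The full 4-tuple is the catalogue key, so the same pdn can exist
# under different owners, ids or editions without a clash.
catalogue = {}
n = PdsName(owner="ABC", pdn="RESULTS", ident="RUN1", edition=1)
print(valid(n))  # True
catalogue[n] = "disk"
```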

Three levels of access control for permanent datasets are available. For each dataset there is, firstly, the type of access allowed to anyone (by default, read only); secondly, the type of access allowed to each named user in a list of users; and thirdly, the access allowed to the owner of the dataset (by default, all access).
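The three levels combine as a simple check in order of specificity: owner first, then the named-user list, then the public default. This is a hedged sketch of the rule as described, not actual COS code; the function and dictionary layout are assumptions.

```python
READ, WRITE = "read", "write"

def allowed(dataset, user, mode):
    """Apply the three access levels in order of specificity."""
    # Level 3: the owner has, by default, all access
    if user == dataset["owner"]:
        return mode in dataset.get("owner_access", {READ, WRITE})
    # Level 2: access granted individually to each named user in a list
    per_user = dataset.get("users", {})
    if user in per_user:
        return mode in per_user[user]
    # Level 1: access allowed to anyone (by default, read only)
    return mode in dataset.get("public_access", {READ})

ds = {"owner": "ABC", "users": {"DEF": {READ, WRITE}}}
print(allowed(ds, "ABC", WRITE))  # True: owner has all access by default
print(allowed(ds, "GHI", WRITE))  # False: public access is read-only
print(allowed(ds, "GHI", READ))   # True: anyone may read by default
```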

In the initial service, user magnetic tapes can be read and written by the CMS batch system, and the data transferred between COS and CMS. Ultimately, it will be possible for users to read and write their tapes directly from Cray programs. Such tapes will be known to the same tape management system as those in CMS and MVS, and similar listing, management and security facilities will be available in all three systems.

Initially, a basic disk dataset management system will be run. However, a more powerful data management system will be implemented soon. Details of both are as follows:

The initial system is based on "retention periods". When a PDS is created, the user specifies a retention period of up to 28 days (default 14 days). The dataset will be deleted automatically when the retention period, measured from the day the dataset was last used (ACCESSed), has expired. Users requiring large permanent datasets (exceeding 100 Mbytes) or a large total filespace (exceeding 1 Gbyte) should contact User Support.
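The retention rule can be sketched as a date calculation. The limits (28-day maximum, 14-day default) come from the text; the function name and the exact day-boundary behaviour are assumptions for illustration.

```python
from datetime import date, timedelta

MAX_RETENTION = 28      # days: the longest a user may specify
DEFAULT_RETENTION = 14  # days: used when no period is given

def expired(last_accessed: date, today: date,
            retention: int = DEFAULT_RETENTION) -> bool:
    """A PDS expires once its retention period, measured from the day
    it was last ACCESSed, has passed."""
    retention = min(retention, MAX_RETENTION)  # 28 days is the ceiling
    return today > last_accessed + timedelta(days=retention)

print(expired(date(1987, 1, 1), date(1987, 1, 10)))      # False: within 14 days
print(expired(date(1987, 1, 1), date(1987, 1, 20)))      # True: default expired
print(expired(date(1987, 1, 1), date(1987, 1, 20), 28))  # False: 28-day period
```

Note that each ACCESS resets the clock, so a dataset in regular use is never deleted under this scheme.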

The more powerful system will be based on the COS facility for automatic archiving and transparent retrieval of datasets between disks and a backing store. Users will be able to specify much larger retention periods. After a period of inactivity, a dataset will automatically be deleted from disk (but remain in the backing store and in the catalogue). Next time the dataset is ACCESSed, it will be transparently restored to disk. (Datasets will also be restored in this way after a disk failure). The backing store may be tape reels or the Masstor M860 cartridge device which is quicker and more automatic. Users need not be aware of the actual location of their datasets at any particular time. Eventually, users will be given maximum and average space allocations (like the current CMS system), which will include all permanent datasets owned by them, whether on disk or not.
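The migrate-and-restore cycle described above can be sketched as a small state machine over the catalogue. The class, method names and location strings are hypothetical; only the behaviour (migration off disk, catalogue entry retained, transparent restore on ACCESS) follows the text.

```python
class Catalogue:
    """Toy model of the automatic archiving scheme: every dataset stays
    in the catalogue, but its data may live on disk or in backing store
    (tape reels or the Masstor M860 cartridge device)."""

    def __init__(self):
        self.location = {}  # dataset name -> "disk" or "backing store"

    def save(self, name):
        """A newly created PDS starts on disk."""
        self.location[name] = "disk"

    def migrate_inactive(self, name):
        """After a period of inactivity the dataset is deleted from disk
        but remains in the backing store and in the catalogue."""
        self.location[name] = "backing store"

    def access(self, name):
        """ACCESS transparently restores a migrated dataset to disk;
        the user need not know where the data actually was."""
        if self.location[name] == "backing store":
            self.location[name] = "disk"
        return self.location[name]

cat = Catalogue()
cat.save("RESULTS")
cat.migrate_inactive("RESULTS")
print(cat.access("RESULTS"))  # "disk": restored transparently on ACCESS
```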

Cray Graphics Facilities

When the decision was taken to install the Cray X-MP/48 at the Atlas Centre, work started to provide graphics facilities for users of the machine. This section summarises the methods of working that are envisaged, the software that will be provided with the initial service and the hardware and software that are being evaluated for future extensions to the service.

Overview of Facilities Envisaged

For both central and distributed sites, the possible ways of working are grouped into three main areas: HARDCOPY, VIEWING and INTERACTION.

For HARDCOPY and VIEWING, the output from the Cray will be routed to a central or distributed device or filestore. For INTERACTION, the user will have a real-time connection to the program running within the Cray.

Central Hardcopy Facilities

The Atlas Centre already possesses a number of hardcopy devices. These will all be accessible, at least from GKS, in the initial graphics service.

We intend to develop a system for the direct output of graphics to video tape. This would be able to produce open-reel masters or cassette copies in all common formats (VHS, Beta, U-matic, Video-8).

It is doubtful whether, with such a video system available, the demand for film (16mm or 35mm) output would be sufficient to justify a facility for the production of cine film output at the Atlas Centre when there is already a suitable facility at ULCC. There is still likely to be a requirement for single frame (slide) output.

Viewing Facilities

The initial viewing service will, inevitably, use existing graphics devices. Files from the Cray will be routed to existing computer filestores and viewed from there. A centrally supported viewing program for metafiles produced by GKS will be provided, avoiding the need for duplication of effort in this area.

As funds are made available, the viewing facilities will be extended with the powerful graphics workstations recommended in the Forty Report. After a detailed survey of such devices, the general conclusion is that the best solution is not just a high-power display device but an autonomous computer system with powerful graphics facilities. At the top end of the price range this includes systems like the Sun 3, Silicon Graphics IRIS and Hewlett-Packard 320 SRX. There are also less expensive systems of the same overall structure that could be considered for acquisition in larger numbers for use at distributed sites. These include reduced-power versions of some of the above and even systems based on IBM PC/XT/AT/RT machines with attached graphics capabilities.

The attraction of this solution is that the user is able to proceed with other work while results are transferred back to his system, and that display of results is possible at very high speeds because all operations are then local, involving neither a front-end nor the network.

Interactive Facilities

While interaction is possible under the COS operating system, the way it is done is radically different under UNICOS. At present therefore, planning for interactive graphics has mostly involved reviewing the possible graphics devices for suitability. The use of (Unix-based) autonomous systems simplifies the connection to UNICOS since there are agreed protocols for communication.

Initial Software

The initial graphics service will include:

If there is sufficient demand, we can also mount GINO-SURF; Mark II of the NAG system provides substantially the same range of facilities.

Future Plans

It is recognised that the sort of problems that will be tackled on the Cray will require graphics facilities of the highest quality, including high-resolution rendering of 3D, wire-frame and solid modelling, simulated lighting and animation. There are countless software systems that provide some, most or all of these. In addition, the type of workstation being considered for viewing can typically achieve most of these functions in hardware. These workstations can drive hardcopy devices, possibly even the video system. We are therefore evaluating the appropriate combination of hardware and software that could provide high quality rendering where it is needed.

Systems Software on the Cray X-MP

The Cray X-MP/48 will initially run the following systems software: the Cray Operating System (COS) Version 1.15 (not 1.14 as stated in the last newsletter); Version 1.15 of the Library and Product Set; the Fortran compiler CFT 1.15; Pascal 3.00 and C 1.00. This article gives brief details of some of the major enhancements in these releases. Until the exact configuration is defined and checked, we cannot guarantee the inclusion of all these releases.

Cray Operating System COS 1.15

Support for the IBM 3480 high-speed cartridge tape subsystem, which has been ordered but not yet delivered.

An archiving facility comprising an automated backup and space management facility for COS permanent datasets to Cray on-line tapes.

CPU targeting allows programs to be developed which are destined to run on a different type of Cray mainframe. This feature is now available via a new TARGET control statement, a compiler option or an appropriate product control statement.

Many I/O enhancements including the queuing of multiple I/O requests to the same dataset, user control of the size and occurrence of physical transfers and better error processing. These should lead to fewer I/O requests and less time spent waiting for I/O and for execution.

Version 1.15 Library

Data records specified with Fortran formatted I/O can now exceed the previous limit of 152 characters. The data storage area is now in common blocks which allows the user to define a larger area at load time.

Fortran callable tape positioning routines are provided. There is a feature which provides data conversion to and from VAX/VMS data types and formats for VAX/VMS tapes used on-line on the Cray.

Version 1.15 Product Set

Cray Assembler Language (CAL 2.0) has been re-written in Pascal and will eventually replace CAL 1.0. It runs on the Cray 2 as well as the Cray 1 and X-MP and has many user-requested enhancements. However, CAL 1.0 remains the default assembler on the X-MP. Most codes that assemble under CAL 1.0 should assemble without error under CAL 2.0.

New versions of DEBUG and SYMDEBUG have been developed as a first stage in providing an integrated set of debugging tools for Cray programmers. They have the same interface as before and only minor features have been added but they now support debugging of CFT77 and Pascal code.

Some changes have been made to SEGLDR including the addition of a load-and-go feature.

The SORT package can be invoked from JCL rather than just from Fortran callable subroutines as previously.

Fortran Compiler CFT 1.15

Some inconsistencies in the handling of Boolean operands have been removed. For example, arithmetic with two Boolean operands always produces a Boolean result and arithmetic expressions with Boolean operands are always evaluated from left to right within a precedence level. This can lead to different answers being produced but a warning message is issued in these cases.

There are several enhancements to DO loop processing. More information is given about which loops vectorised, which did not, and why. This option should help users with program optimisation, though it increases compilation times by 10% to 15%; execution times are unaffected. Constant Increment Integers (CIIs) have been generalised to Constant Increment Variables (CIVs), allowing CFT to vectorise more loops.

Other features which now produce vector rather than scalar code are simple branches out of loops, a conditional block of code ending with an unconditional branch out of a loop, loops containing the MAX, MIN, AND, OR operators and subsequent reference loops such as:

      DO 10 I=1,N
      A(I)=C(I)
   10 B(I)=A(I+K)

There is an option for initialising the user's stack. This helps in debugging multitasking programs by identifying variables which are referenced before being initialised.

Separate blocks are generated for program instructions and data. Very large programs will benefit from having code and data separated so that all code blocks are loaded into lower memory while data blocks may extend into upper memory. This enables more memory to be used for local data.

PASCAL 3.00

Debug symbol tables are generated so that Pascal programs can be debugged with the symbolic debugging tools DEBUG and SYMDEBUG.

Multitasking support has been provided allowing Pascal programs to be used in a multitasking environment, optionally linked with Fortran and CAL routines. Support of task common blocks also facilitates communication with Fortran routines.

Automatic vectorisation of FOR loops and array processing language extensions are provided, improving the performance of Pascal programs which manipulate arrays.

C Compiler 1.00

This is a new product based on the AT&T portable C compiler.
