No 34 April 1983


1. CENTRAL COMPUTING REPRESENTATIVES' MEETING NOTES 22 MARCH 1983

OPERATIONS

Performance

Problems with the supply grid at AERE caused approximately 3 hours lost time on the night of 28 November.

There were problems with the Memorex 3652 disk drives when powering on after the Christmas shutdown which extended the down time of the MVT systems. Two Head Disk Assemblies (HDAs) had to be replaced and RHEL01 and RHEL05 were restored from backups taken prior to the shutdown.

Electric and the MVT batch systems were halted for 6 hours on the night of Saturday 26 February to enable operations to copy MVT and Electric disks onto a different string.

Interface control checks from the 1270s have caused several VM restarts and we are currently attempting to tie this down to a particular interface on one 1270.

There have been no other major system outages.

ATLAS 10 Delivery and Test Schedule

The ICL disk drives arrived on March 2nd, and were assembled and powered up on the 3rd. Preliminary tests were started on the 4th, and RAL designed acceptance tests commenced on the 7th. The ATLAS-10 is likely to arrive a little earlier than the agreed delivery date of May 31st. From June 1st until handover in September, RAL and ICL will share development and production time on the system according to a pre-arranged plan.

Card Punches and Card Punching Facilities

The IBM 2540 card reader/punch will be unavailable as of 1 April 1983, as announced previously. Another card reader (IBM 2501) will be available on the Central System, but there will no longer be a punched card service. IBM 029 card punches will remain in service where required, but those not in use will be removed to avoid unnecessary maintenance charges.

Maintenance

In view of the severe financial situation, we have cancelled the maintenance contract on the 3032.

This means that from 1 April 1983 maintenance is on a time-and-parts basis; the level of service should not noticeably deteriorate. The 3081 does not need maintenance breaks.

DPCE, a large computer maintenance firm, has won the contract for maintenance of the Memorex disks and communications equipment. They have a good reputation with British Airways and other large installations.

Air Conditioning Shutdowns

Owing to the spare capacity in the air conditioning plant now that the 195s have been removed, we are able to reduce the computer down-time attributable to air conditioning maintenance. Maintenance will remain twice-yearly, with Spring shutdowns from 1600hrs on a Friday until 0800hrs on the following Monday; Autumn shutdowns run from 0800hrs on a Saturday until 2000hrs on the Sunday.

SYSTEM SOFTWARE

ELECTRIC Run-down

As part of the run-down plan, no further archiving of ELECTRIC files will be allowed after 30th June 1983. Users should plan to restore all their archived files to the on-line filestore before ELECTRIC closes at the end of 1983. Some increases in space allocations may be possible to accommodate restored files.

All users who have transferred files to CMS should delete them from ELECTRIC. This can be done by Computing Division staff in cases where all files below the main directory can be deleted.

IBM Graphics Software News

There has recently been a considerable amount of work done to correct faults in, and to add facilities to, IMPACT, SMOG and DRAFT. New versions of all of these systems are now available under CMS and will be installed as the production systems when some final test output from the FR80 has been checked. The same versions will be installed under MVT as soon as effort permits.

IMPACT

This has been modified to support (or correct support for):

SMOG

Faults in this have now been corrected in the following areas:

DRAFT

This has had a vast amount of work done on it; amongst the facilities improved or added are:

Work is progressing on schedule and it is hoped to make a progress report at the June meeting.

VM AND CMS

CMSELEC

Version 2.0 allows the FIND and LIST commands (syntax as for ELECTRIC) to be used to access MUGWUMP picture files. This facility does not depend on ELECTRIC and will still be available after ELECTRIC closure at least until the end of the MVT service. However, because CMS has only read access to OS datasets, FIND does not update the last access date and it has not been possible to implement the SCRATCH command. These effects can be achieved by using FIND and SCRATCH with the utility program ELFR80 which is now available in OSUTIL. See the on-line help system for full details. In addition, the expiry period of these files has been increased from 7 to 28 days.

Routing graphics output from an MVT job to the CMS graphics filestore will not be implemented, though this facility will be provided in MVS.

VM/SP Release 2

The CMS part of VM/SP release 2 was installed on 17th February. A summary of new commands and other changes including the new mail facilities was contained in the last meeting's notes and published in the December 1982 issue of FORUM. Release 2 manuals are available and are being distributed. In the meantime see the on-line help system for more details. The RAL documentation will be available in the late spring.

Release 2 contains a new SENDFILE command which is incompatible with the former RAL command of the same name. Therefore a new RAL command GIVEFILE (identical in operation to the RAL SENDFILE) has been installed and should be used from now on. The old RAL command has been copied to the OLDSOFT disk (see HELP CMS OLDSOFT).

CMS Batch Monitor

The IBM batch monitor will be installed as soon as work on interfacing it to the AXEMAN system has been completed. The batch monitor schedules a CMS batch job to run in one of a number of slave batch virtual machines which are defined to accept jobs of a particular class dependent on CPU, core requirements or time of day.

Output Routing

The following changes have been made to the routing systems used by the VPRINT/VPUNCH, SUBMIT, XPLANT and VROUTE commands.

The VROUTE EXEC now includes all the functions of the old OSROUTE EXEC. It will also define suitable routing to enable users on HASP workstations to use VPRINT/VPUNCH commands - these will run NPRINT/NPUNCH when routing is defined to a HASP workstation.

VROUTE may also be used to define submission routing of jobs to another site - e.g. to define that all jobs submitted by SUBMIT and XPLANT should be run at CERN instead of on the RAL OS/MVT system. Note that this cannot be used with default output routing of OS output via the VNET tags - that facility is only provided on the RAL OS/MVT system.

VROUTE also provides the means to set all the routings (printer, punch and RAL OS/MVT output) to the same location with one call to VROUTE.

The OSROUTE EXEC is now redundant and users should convert their PROFILE EXECs to use the new VROUTE form. This change has been made in the STANPROF EXEC.
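
As a purely illustrative sketch (the operands shown here are assumptions, not the documented syntax; see HELP CMS VROUTE and HELP CMS ROUTING for the real form), setting printer, punch and RAL OS/MVT output routing to a single destination from a PROFILE EXEC might then be a single call such as:

VROUTE ALL REMOTE47

A default node for job submission can be set in the same way, as in the VROUTE Submit GEN example given under the NJE link section later in these notes.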

FREEDA

The action taken by FREEDA if its disk becomes full has been changed from returning the oldest files to users' readers to putting them into the CMS ARCHIVE. It has therefore also become possible for users to request that their files should be archived rather than deleted or returned at the end of their time on FREEDA's disk. However, if this is done then delays in retrieving them from the archive will probably be quite long. See FREEDA's help menu and files for details of the new options.

OSUTIL

Several new utilities have been added to OSUTIL: DSRENAME, for renaming an OS data set; ELFR80, for sending Electric picture files to the FR80; DSCOPY, for copying a sequential data set without having to set up any JCL beforehand; TDMS, for executing TDMS commands.

The command syntax has been altered to make it more consistent with other CMS commands. A repeat option has been provided which allows a utility to be executed several times in the same job.
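
Purely as a sketch of the intended style of use (the operand order and the data set names are assumptions; see the on-line help for each utility for its actual syntax), copying and then renaming an OS data set from a CMS session might look something like:

DSCOPY   NA7.EXPT.SUMMARY  NA7.EXPT.SUMMARY2
DSRENAME NA7.EXPT.SUMMARY2 NA7.EXPT.OLDCOPY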

Use of Batch Services by CMS Users

Although CMS runs in the same machine as MVT and MVS, communication between the systems is complicated. Programs may be compiled and partially tested under CMS, but the inability of MVT batch to access CMS files means that it is necessary, for example, to duplicate data and programs. It has also been pointed out that the absence of any general provision for CMS programs to access data on magnetic tape, together with other restrictions, limits the amount of testing that can be carried out under CMS.

Some consideration has been given to these and related problems with a view to creating a more useful user environment. When CMS was first introduced at RAL it was chiefly seen as a replacement for ELECTRIC. With CMS running on the 3032, it was considered that too much interactive work, especially graphics, would overload the system; this was at a time when ELECTRIC response was giving great cause for concern, and it also led to limitations being placed on the permitted sizes of CMS user machines. The constraints on MVT region size often meant that limiting CMS machine size was not important. Larger programs could be run in CMSBATCH, but not only was this more awkward, it also lacked mechanisms to control the workload: work sent to CMSBATCH could be returned in seconds or hours with no predictability.

For reasons such as these, MVT batch is used for complicated testing, and for small, medium and large scale production work.

There seem to be two ways of easing the problem of having to work in two environments on the same machine. On the one hand, access from CMS to data stored under MVT can be made easier for users; it may prove possible to provide some better EXECs and more guidance. Attempts will be made to do this.

On the other hand, it is recognised that there are many applications whose needs could be totally met within CMS. The basic facilities are mostly there. The chief problem would be of organising the necessary resources such as disk space and paging areas to allow larger programs to run. During 1983 the amount of disk space available to the total system will reach the point where much data currently accessed from tape can be brought onto disk. This should enable CMS programs to be able to access such data without the need for tape. The new CMS Batch Monitor will also provide better control over its workload: this should prove a more useful tool than CMSBATCH.

The possibility of providing more scope under CMS is under study, together with examination of necessary additional facilities for ease of use.

It should be noted that these comments on CMS service in no way affect the traditional means of handling large scale production work, which should be carried out on MVT (and MVS when available). This is to allow existing batch users to continue current methods of working, and for compatibility with DL, CERN and DESY.

FORTRAN

Fortran H Extended Compiler

This compiler (release 2.3.0) replaced the Fortran H Extended Plus Compiler on both MVT and CMS on 1/12/82 as previously announced. The Fortran H Extended Plus compiler was finally removed from all public disks on 1/2/83. During December a bug was discovered in the H Extended Compiler in the handling of LOGICAL*1 variables in a DO loop. This bug was resolved by applying the latest available unofficial IBM PTF (Program Temporary Fix) to the compiler. This brought the release up to 2.3.6 and was done on Wednesday 12th January 1983 on both MVT and CMS.

One problem which occurs very occasionally with the H Extended Compiler when running under MVT still remains: the compiler abends with an 0C5 during compilation, but a subsequent rerun compiles successfully. This problem was noted with the previous compiler and its frequency does not appear to have increased. As it is very rare, and attempts to locate it have so far failed, it is most unlikely that it will be fixed in the MVT version (it occurs only under MVT).

Fortran Library

The Fortran Mod II library has remained unchanged during this period (except for the bug fix announced at the last Reps Meeting). The ENDFILE bug in the MVT library remains unsolved. It is still impossible for an ENDFILE statement to prematurely terminate an input dataset.

VS Fortran

Version 1.2.0 (release 2.0) of the VS Fortran Compiler and Library was installed on CMS on Wednesday 12th January 1983. The library is also available on MVT (SYS1.VFORTLIB). Work has started on attempting to run the compiler under MVT, though so far this has proved unsuccessful. This version is proving a very stable release and is much improved over earlier releases of this compiler, so we encourage users to move to it where possible. Details of new facilities in this release are in NEWS FORTRAN on CMS.

Users are reminded that they should always compare results of a pre-production run of a job with those from a different compiler or the same compiler at OPT(0) before going into full production. Suspected compiler bugs should be reported to PAO.
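
For example (assuming the VS Fortran compiler is invoked on CMS by the FORTVS command and that the source is in a file MYPROG FORTRAN; check NEWS FORTRAN for the exact command and option names), a pre-production check might compile the same source at OPT(0) and at the optimisation level normally used for production (OPT(2) is used below purely as an example), run both versions on the same data, and compare the results:

FORTVS MYPROG ( OPT(0)
FORTVS MYPROG ( OPT(2)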

MVT

Model DCBs

The model DCBs on the MVT system were changed on 16 February so that any reference to LRECL=X was changed to LRECL=32760. This change was to allow copying of data sets from MVT to CMS for the following DCBs: ONEKB, THREEKB, TRACK30, TRACK50 and VBS.
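
As a sketch only (it is assumed here that a model DCB is picked up by naming it in the DCB parameter; the data set name and space figures are invented), a DD statement creating a data set with the VBS attributes, which now include LRECL=32760, might look like:

//* hypothetical example referencing the VBS model DCB
//OUT      DD DSN=EDIN.TEST.VBSDATA,DISP=(NEW,CATLG),
//            UNIT=STORAGE,SPACE=(TRK,(30,10),RLSE),DCB=VBS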

New Disks

A number of additional 3350 equivalent disks are about to be made available to the MVT system. Initially there will be three, but the number will be increased as further disks pass acceptance tests. It is not intended to use any formal mechanism for controlling the use of this facility. The availability of space will be monitored by a newly appointed Data Manager. Data in active use may remain on the disks indefinitely. When data sets fall into disuse they will be removed. Except in special circumstances, this will not occur until 30 days after the last use. This period will be reviewed by the Data Manager and amended if necessary. Backup copies will not be taken. It is the user's responsibility to arrange appropriate archiving. This facility is particularly aimed at reducing the number of tape mounts. As the facility expands, measures may be introduced to limit tape mounts. For example the MVS AU will include tape mounting charges.

Users will not refer to these disks by volume. The system will select one of the disks when the user JCL specifies UNIT=STORAGE. Data sets must be created with DISP=(NEW,CATLG) and further accesses should be made via the system catalog based on DSN.
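
By way of illustration (the data set name and space allocation below are invented), creating a data set on the new disks and reading it back later via the catalog might be coded as:

//* create: the system chooses one of the new disks
//NEWDATA  DD DSN=NA7.EXPT.SUMMARY,DISP=(NEW,CATLG),
//            UNIT=STORAGE,SPACE=(CYL,(5,2),RLSE)
//* later access: no UNIT or VOL needed, the catalog locates the data set
//OLDDATA  DD DSN=NA7.EXPT.SUMMARY,DISP=OLD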

A similar system will operate in the MVS service but there will be a system of access control.

Data Set Naming Conventions

All data sets on the new disks will be catalogued.

The high-level index (primary component of the DSN) will identify the group of users to whom the data set belongs. This 'identifier' must be registered with PAO so that it can be created as a catalogued node before any data sets using it may be created. This is a continuation of an existing scheme, whereby nodes such as NA7 and EDIN are already in use.

The use of the node 'USER.' is being discouraged.

WORKSTATIONS AND TELECOMMUNICATIONS

VNET Conversion programme

Further workstations have been added to VNET, namely ZIGA (Imperial College Mechanical Engineering, Remote 90), DLNSF (Daresbury, Remote 88), and RLGB (Remote 47). These are all GEC 4000 machines.

Some more VNET problems have been solved. In particular, the ones responsible for delaying the conversion of GEC 4000 machines have now been cured, which allows the conversion programme to advance further. CMS users may examine the current status of VNET problems by typing NEWS VNET PROBLEMS. A new manual is in preparation. It deals with VNET only; machine dependent documents will be issued separately.

It is beginning to appear that access to the SERC network from non-networked workstations may not be required. If it is true that users have other means of accessing CMS, then the provision of access via VNET will not be implemented. Users affected by such a decision are requested to contact P J Hemmings, User Interface Group, urgently.

GEC 2050 Replacements

A number of replacement machines have recently been commissioned, and include the following:

Site               Remote   Link-id    Replacement
Edin. Physics      29       EDINBRGH   GEC4085
Westfield Coll.    13       WESTFLD    GEC4085
CERN               10       CERN       GEC4160
DESY               64       DESY       GEC4160

These machines are all workstations providing terminal support and Remote Job Entry facilities to RAL, and in some cases to Daresbury.

Services as an alternative to the GEC2050 have been developed on a number of other sites. Some are available, as follows:

2050 Site    Rm   Alternative system    Rm   Link-id    Status
Sussex       34   ICF Prime             37   RLPRIME1
Surrey       34   ICF Prime             37   RLPRIME1   OK
Bristol      16   ICF 4090              69   BRGA
Bangor       90   Bangor DEC-10         33   BANGFTP    OK
Leicester    28   Leic Poly GEC4090     28   LEICS      OK

JNT PADS

The Computing Division has purchased 20 Camtec PADs designed in accordance with the JNT specification. A 'PAD' is a new type of device for allowing simple connection to an X25 network. We have just received (in pre-release form) the second production level of software for the PADs; this software is now undergoing trials. Initially the PADs had a number of bugs, for which patches were required, and a number of implementation limitations. The full specification provides for down-line loading of software and configuration tables, and for printer support. Neither of these was available in the first release, which meant that system loading required a cassette recorder. Printer support is available with the new release, which we are testing.

There have been some problems using PAD connections to certain machine types, notably VAXs and certain Primes. Essentially, the problems are in the area of Network terminal protocols and their implementations on various hosts. The problems are being identified and solutions found. Users connected to PADs will be kept fully informed of developments in this area. It does mean, however, that in some cases it may be necessary to set up your terminal-host connection parameters. In the vast majority of cases default values will enable standard working practices to be almost unchanged.
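
As a rough guide only (PAR? and SET are the standard X.28 commands for inspecting and setting X.3 parameters; the values actually needed for a particular host are not given here and will depend on the guidance issued for that host), the adjustment is typically made at the PAD command prompt with commands of the form:

PAR?
SET 2:0

The first displays the current parameter settings; the second, purely as an example, sets parameter 2 (echo) to 0 so that the host rather than the PAD echoes what is typed.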

Access to PSS and Other Networks

Users who need to access machines not connected to SERCNET may find that these machines are supported on another network. This may be British Telecom's Packet Switching Network (PSS), or other national or international networks connected to PSS. In general, it is possible for users who are supported by SERC to gain access via the PSS gateway at RAL. Because this is a paying service, arrangements to cover the costs have to be made to the satisfaction of the Resource Management Section of Operations Group, who can issue the necessary account, password and documentation.

NJE Link To CERN

A Networked Job Entry (NJE) link connects the RSCS (VNET) machine at RAL and the IBM complex at CERN.

This link provides the following facilities:

  1. Job submission from CMS or MVT to the CERN IBM system.
  2. Job submission from Wylbur to the MVT system at RAL.
  3. Job output (print or punch) from a CERN job to a CMS machine.
  4. Issuing a command from CMS to interrogate JES2 at CERN.
  5. File transfers between CMS or OS disk and IBM datasets at CERN.

Further facilities are planned but they will require more testing/development.

Items 1, 3, 4 and 5 were described in the notes for the last meeting.

Job Submission to CERN from CMS

The SUBMIT or XPLANT execs may now be used with the SUBROUTE option to submit CMS files to the CERN IBM system for execution.

The calling sequence is:

SUBMIT <fn <ft <fm>>> ( SUBRoute GEN JOB *
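
For example, a CMS file MYJOB JCL A (an invented name) would be sent to the CERN IBM system for execution with:

SUBMIT MYJOB JCL A ( SUBROUTE GEN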

Alternatively the default target for job submission may be set by using the VROUTE exec.

VROUTE Submit GEN

After this exec is called, jobs submitted using SUBMIT, XPLANT or PLANT will be sent via the GEN link to CERN. Individual jobs can be sent to a different node by using the SUBROUTE option of SUBMIT or XPLANT. For more information see the CMS HELP system for VROUTE, SUBMIT, XPLANT and ROUTING.

Jobs submitted to CERN no longer need to specify the procedure library name.

Job Submission to MVT at RAL from CERN

To submit the Wylbur active file to the MVT system at RAL, use the RUN command.

RUN ON RLVM370.FEM

Note that the default routing for output from such a job will be to the LOCAL printers at RAL. Any other routing required must be specified by a /*ROUTE HASP control card. The Wylbur default routing information is not passed to RAL.

Users should note that a job submitted via a NJE link remembers its originating node. Attempts to route the output of a job should take this into account. For example, a job submitted to CERN from RAL via the NJE link that wished to print on CERN's REMOTE2 printer (ie the B2B) would require a JES2 control card.

/*ROUTE   PRINT  GEN.REMOTE2

whereas the same job submitted at CERN would only require

/*ROUTE  PRINT  REMOTE2

Failure to include the node will result in the print file returning over the NJE link and being sent by VNET to its REMOTE2 (ie OXFORD) workstation.

LIBRARIES AND PACKAGES

NAGLIB

The NAG Mark 1 graphical supplement has been installed on MVT and CMS. It consists of a new chapter of routines, J06, for drawing graphs, histograms, contour plots and 3D plots. Online writeups are available for all of the routines. See 'NEWS NAG' for more details.

The NAG help system has been installed on CMS and may be accessed via the NAGHELP command. See 'NEWS NAG' for more details.

The NAG example programs and data (including those for the graphical supplement) are now held on the minidisk PROGLIB 198 (no read password needed). Users are free to take copies of the programs and modify them as they wish.

RHELIB

Four bit-manipulation routines have been added to RHELIB on MVT and CMS. Type 'HELP RHELIB USETB' for more details.

CERNLIB

The routines UZERO, UBLANK and UFILL now cause a user abend (code 333) if called with 2, 2 or 3 arguments respectively. They will be replaced by the standard CERN versions during May.

2. MACHINE COMPARISONS

The following is an update to the article published in Forum 18.

We are often asked by potential grant holders how to convert time on their local machine to 360/195 hours. As an aid to conversion, the following figures are provided as a guide to the amount of processor time required for batch jobs. The ratios are derived from Central Computer and Telecommunications Agency (CCTA) synthetic benchmarks for single precision FORTRAN, manufacturers' MIPS (millions of instructions per second) figures and other relativities, each representing rather artificial or arbitrary measurement conditions; hence wide variations will be found from application to application. The figures should not be used for general comparisons of the different systems.

On examining the figures you may find, from your own performance information, somewhat different ratios, especially as the numbers given are relative to the 360/195, which has a fairly wide performance range. We would be interested to learn of any such information so that we might update our own tables.
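
As a rough worked example of how the table below might be used: a requirement estimated at 50 hours on a DEC VAX 11/780 (with FPA) corresponds, on these figures, to approximately 50 × 0.24 = 12 hours of 360/195 time, subject to all the caveats above.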

SYSTEM   RATIO (360/195 = 1.00)
IBM 360/195 1.00
IBM 4331-1 0.03
IBM 4341-1 0.21
IBM 3033U 1.17
IBM 3081 (1 processor) 1.26
AMDAHL 470/V8 1.33
BURROUGHS B 7750 0.15
CDC CYBER 170-20 0.18
CDC 7600 2.15
CDC CYBER 205 (2 PIPE) SCALAR 2.51
CDC CYBER 205 (2 PIPE) (VECTOR LENGTH 1000) 44.00
CRAY 1 (SCALAR) 3.52
CRAY 1 (VECTOR LENGTH 64+) 20.00
CTL MODULAR 1 0.01
DATA GENERAL ECLIPSE S/200 0.11
DEC KL10 0.25
DEC PDP 11/45 (FPP) 0.04
DEC VAX 11/780 (FPA) 0.24
GEC 4080 0.05
HARRIS S 500 (SAU) 0.17
HP 3000-III 0.04
HONEYWELL 6/43 (SIP) 0.05
HONEYWELL 6080 0.16
ICL 1904S 0.04
ICL 2960 (VME/B) 0.08
ICL 2980 (FMDU) 0.72
ICL SYSTEM 4/72 0.06
MODCOMP CLASSIC 0.20
NORD 50 0.11
PERKIN ELMER 3220 0.11
PRIME 400 0.09
PRIME 750 0.21
SEL 32/77 (HW FL PT) 0.12
SIEMENS 7748 0.06
SYSTIME 8000 0.24
UNIVAC 1108 0.17
Jed Brown - User Interface Group

3. XPLANT

A Rutherford report RL-83-023 entitled 'XPLANT for ELECTRIC users' is now available. Its aim is to help users familiar with the ELECTRIC system of delayed editing to use the XPLANT system in CMS.

It introduces the XPLANT command and a few of the XPLANT macros and then shows a user how to use these to reproduce the effects of each of ELECTRIC's delayed editing commands.

It also shows a few of the many additional features that make XPLANT such a powerful system.

Copies of this report may be obtained from the Documentation Officer on ext 5272.

J C Gordon - User Support Group

4. FURTHER NOTES ON JNT MAIL

In an article in the previous issue of FORUM the overall operation of the new MAIL protocol was outlined, together with some implementation details for MAIL on the GEC systems. This article gives some further details for users of the system on how to specify where a recipient's mail box exists.

The MAIL protocol supports the concept of 'relaying' mail through intermediate sites, analogous to sorting offices. This is useful when sending mail to users who do not have a direct connection to SERCNET but may be contacted via a chain of relay sites, the last of which is connected to SERCNET. Examples of this kind of situation are sites connected to campus networks, to PSS, or (in the USA) to ARPAnet.

As an example, consider a user of the ICL 2972 at the Edinburgh Regional Computer Centre (ERCC). The mailbox name is "J.Mills" and the machine name is "2972". The ICL machine is connected to the RCOnet at Edinburgh, which has connections to the DECsystem-10 at ERCC; the DEC-10 may be accessed from SERCnet.

Users on the DECsystem-10 can address the user as J.Mills@RCO. Users of other machines on SERCnet can request the DEC-10 to relay the mail to them by using the address J.Mills%RCO@EDXA.ERCC.FTP. This means: "Transfer this mail to EDXA.ERCC.FTP (the site after the @ symbol) and then deliver it to J.Mills@RCO". Note the transformation of the last % address symbol to an @ symbol. The relaying may be via more than one host, as in the example J.Mills%RCO%EDXA.ERCC.FTP@RLGB, which is sent first to RLGB, then to the ERCC DEC-10 and finally to RCO for delivery. This relaying mechanism enables far-flung routes to be explicitly traced out by the sender.

Most users of mail will be relieved to hear that the majority of sites do not need relay specifications.

Note, however, that SERC does not undertake 'third party' relaying (e.g. between two external networks) unless authorisation is specified for the PSS address.

If a user receives mail, every relay system that has handled the message "post marks" the header as it passes through. The postmark includes the system name, and time and date. This mechanism guarantees a return path for replies (with the possible exception of gateways that require authorisation, if return authorisation has not been specified).

This is the state of the implementations of MAIL (at April 1983) on the following machines/operating systems:

GEC 4000/OS4000 (RAL and DL)

Full relay and user interface system (and some protocol conversion ability)

PRIME (RAL)

Implementation being worked on. The LPOST command can send outgoing mail. A subset of JNT mail is supported (e.g. usernames must be used, not real names).

DEC-10 (ERCC, YORK)

MAIL relay operational; incoming mail is delivered into the "TELL" system (so the "REPLY" command cannot be used). The POST program can be used to send outgoing MAIL.

VAX/VMS (UWIST software)

Subset implementation; only one recipient may be specified. Incoming mail is entered into the VAX "MAIL" system, but the REPLY command cannot be used. A separate POST program is used to send outgoing mail.

PDP-11 UNIX (RAL, UCL, York)

Various UNIX user interface programs available. Relaying operational.

IBM VM at RAL (CMS)

MAIL to the IBM should be relayed via a GEC system (e.g. RLGB) until the CMS MAIL system is ready.

Note that for the systems that relay via the GEC, protocol conversion takes place and this means that the mail must be sent to a username in the interim period until the full implementation is ready.

Telecomms messages

Users with messages for the Rutherford Telecomms office may send mail to Telecomms@RLGB, where it will be printed in the office. This facility may be used to keep the operations staff informed of developments or actions concerning terminal or machine faults in the field.

Jonathan Mills - Systems Group

6. GEC SUPPORT

The User Interface Group at RAL maintains a central advisory service for a range of computers, including IBM, PRIME and GEC machines. The GEC section currently supports 17 GEC Multi-User Minis (MUMs) and 8 GEC Workstations. These GEC 4000 series machines are located around the country, mostly in universities; two exceptions are recently-installed workstations at CERN and DESY. They are networked through SERCNET and run under a RAL-enhanced OS4000 operating system.

Each site has the capability of job submission to the central IBM MVT batch service; job output destinations include GEC site printers and the MUM filestore. MUM users have access to a common pool of supported programming software in addition to locally developed, often specialised, software. Updates to centrally-supported software, including OS4000, are distributed over SERCNET from a development GEC 4090 at RAL.

The GEC MUMs and Workstations have site managers, who provide operator cover and local support. Problems can arise which are beyond their expertise; these are passed on to central support, where we are responsible for recording and replying to a wide range of queries. Visits to our GEC User Support are infrequent owing to the geographical remoteness of most GEC sites. The majority of problems are handled through the JNT MAIL facilities; urgent problems are generally reported by telephone.

A Mail-file for SUPPORT is present on RLGB, the RAL GEC 4090. We examine this file daily, and recommend it for submission of queries and information.

The problems which we accept do not relate purely to programming or the operating system. The SUPPORT file is very much a clearing-house for actions on members of all groups in the Computing Division. We forward many queries or messages via MAIL; some are printed and circulated as memos. During the twelve-month period to February 1983 we received 1560 SUPPORT entries. About 900 of these were processed directly, the remainder being forwarded.

If you have any problems or queries relating to the service offered by the GEC MUMs or Workstations, you may present these in any of the following ways:

  1. MAIL to SUPPORT on RLGB
  2. Telephone Kevin Duffey or Stephen Millmore at RAL on extn. 6252
  3. Post details to:
    GEC User Support
    User Interface Group
    Computing Division
    Building R27
    Rutherford Appleton Laboratory
    Chilton
    Didcot
    OXON OX11 0QX
  4. Visit our support staff at the above address.
Kevin Duffey - User Interface Group