No 20. February 1982


2. CENTRAL COMPUTER REPLACEMENT

The SERC Council, at its meeting on 21 January 1982, agreed to the purchase of a 16 Mbyte, 16 channel IBM 3081D to be installed at Rutherford Appleton Laboratory (see January 1982 issue of FORUM). Delivery will be in June. Capital funds, allocated to Central Computing in future years, have been brought forward to reduce the very high maintenance and energy costs of the 360/195s.

It is proposed that one 360/195 be removed when the 3081 is delivered and the other removed a few months later. (The vast amount of space freed in this way will be further supplemented by the removal of the Univac 1108 which RAL currently runs on behalf of NERC.)

After acceptance, the 3081 will simply replace one 360/195. After about a month it is planned to swap the roles of the 3081 and the 3032 front-end. The 3081 will then run an enlarged front-end CMS service, and the 'FEM' (the front-end MVT system) will also be able to run much more batch work than at present. This tactic is being adopted because the front-end facilities are subject to the greatest pressure.

Figure 1: planned configuration — MVT batch on the 360/195 and the 3032 front-end (FEM), with the FEM and CMS running under VM on the 3081.

In September, by which time the 3081 should be tuned to its new role, the rear-end 360/195 should be removed. There will then be a net loss of batch capacity of about 1500 - 2000 hours per year. However, some capital funds remain allocated to Central Computing in financial years 83/84 and 84/85 and a continuation of the replacement plan is expected.

None of these manoeuvres should have any effect on the user's view of the system other than a change in service levels. The 3081 is a modern machine and gives the opportunity to move to a supported operating system, putting RAL Central Computing on a very stable basis.

3. USE OF AUs IN CMS

The following is a brief description of the way in which AUs are calculated and how they are used in a normal terminal session. The algorithm currently in use for calculating AUs from the resources used is:

AUs = K × C(t) × [A × cpu secs + B × connect secs + C × spooled i/o + D × non-spooled i/o]

where:

A = 1.00000, B = 0.00060, C = 0.00035, D = 0.00800, K = 0.00410, and C(t) is given in the following table:

TIME            C(t)      TIME                       C(t)
00.00 - 08.00   0.1       13.00 - 18.00              1.0
08.00 - 09.00   0.4       18.00 - 22.00              0.4
09.00 - 12.00   0.8       22.00 - 24.00              0.1
12.00 - 13.00   0.6       weekends/public holidays   0.1

The charge rate for jobs submitted to CMSBATCH is as in the table, except that it never rises above 0.4.
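As a rough illustration, the accounting algorithm above can be sketched in Python. The constants and the C(t) table are those quoted in the article; the function and variable names are our own, not part of any RAL software.

```python
# Sketch of the AU accounting formula described above.
# Constants A-D and K are as quoted; C(t) is the weekday charge-factor
# table, with weekends and public holidays charged at 0.1 throughout.

A, B, C_COEF, D, K = 1.00000, 0.00060, 0.00035, 0.00800, 0.00410

# (start hour, end hour, charge factor) for weekdays
CHARGE_TABLE = [
    (0, 8, 0.1), (8, 9, 0.4), (9, 12, 0.8),
    (12, 13, 0.6), (13, 18, 1.0), (18, 22, 0.4), (22, 24, 0.1),
]

def charge_factor(hour, weekend_or_holiday=False, cmsbatch=False):
    """Return C(t) for the given hour; CMSBATCH never rises above 0.4."""
    if weekend_or_holiday:
        c = 0.1
    else:
        c = next(f for lo, hi, f in CHARGE_TABLE if lo <= hour < hi)
    return min(c, 0.4) if cmsbatch else c

def aus(cpu_secs, connect_secs, spooled_io, non_spooled_io, hour, **kw):
    """AUs = K * C(t) * [A*cpu + B*connect + C*spooled + D*non-spooled]."""
    return K * charge_factor(hour, **kw) * (
        A * cpu_secs + B * connect_secs
        + C_COEF * spooled_io + D * non_spooled_io)
```

For example, a weekday-afternoon session (C(t) = 1.0) using 10 CPU seconds, 1800 connect seconds, 50 spooled and 2000 non-spooled i/os costs about 0.11 AUs, with the CPU and non-spooled i/o terms dominating — as the next paragraph notes.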

The number of AUs used at any time in a terminal session can be determined by issuing the AUS command, which also returns the CPU time, connect time, spooled i/os and non-spooled i/os used in the calculation. The current charge factor and ration for the account can be obtained with AUS CF and AUS RATION. As can be seen, the most expensive time is on weekday afternoons. Issuing the AUS command periodically can help users to find out where they, in particular, are using most AUs.

The most significant contributors to the AU charge for a session are normally the CPU time used and the non-spooled i/o. Non-spooled i/o is any transfer of data which does not involve the spool, for instance reading or writing files on disk. Spooled i/o refers to transfer of data between devices which do use the spool, ie virtual and real printers, punches and readers.

The commands which use AUs most heavily tend to be large compilations, assemblies and running jobs. Optimisation in compilations can increase the cost dramatically, and it may well be worth requesting no optimisation while debugging. The Fortran G1 compiler is much cheaper than the HX compiler when the latter is run with full optimisation (which is the default). In a test using a program of roughly 450 lines of Fortran, the FORTHX compiler was more than four times as expensive as FORTGI. At present no optimisation is available for the FORTVS compiler, and its cost is roughly comparable with FORTGI.

Large compilations and jobs are best run in the CMSBATCH machine which has a lower charge rate than normal CMS machines during the day. The output is returned via the spool and can be examined using the BROWSE command. This is much cheaper than reading the returned files on to a disk to look at them. Any files which require printing can be transferred directly to the real printer and then the only non-spooled i/o involved in processing those files is that done by the BROWSE command.

Another expensive command is LOAD for large TEXT files. When possible, a MODULE file should be generated using LOAD and GENMOD so that the loading is only done once and subsequent runs can use the module.

If a high proportion of large files exist on a disk, it may be worthwhile to re-format the disk to use a larger block size. This cuts down the number of AUs used in reading the files since a smaller number of i/os will be required. However, the number of blocks available on the disk will be reduced so it is not useful for a disk of small files.
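The trade-off can be illustrated with simple arithmetic: the non-spooled i/o count for reading a file sequentially is roughly the file size divided by the block size, so larger blocks shrink the D-term of the AU formula. A hypothetical sketch (the file and block sizes are illustrative, not CMS defaults):

```python
import math

def reads_needed(file_bytes, block_size):
    """Number of block i/os needed to read a file sequentially."""
    return math.ceil(file_bytes / block_size)

# Illustrative: a 400 KB file read with 800-byte vs 4096-byte blocks.
small = reads_needed(400_000, 800)    # 500 i/os
large = reads_needed(400_000, 4096)   # 98 i/os
# At D = 0.008 per non-spooled i/o (before the K and C(t) scaling),
# the larger block size cuts that term by roughly a factor of five.
```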

Most commonly used CP and CMS commands are comparatively inexpensive in AUs, for instance QUERY and SET commands, MSG, LINK and ACCESS. The AU cost of LISTFILE, BROWSE, HELP and commands such as COPYFILE, COMPARE, TYPE and PRINT obviously depend also on the size of the file being processed. A typical XEDIT session uses AUs in rough proportion to the number of XEDIT subcommands issued. A very rough guide is 0.1 to 0.5 AUs for a half hour of XEDIT at the maximum charge rate.

A good idea of where AUs are being used in a particular machine can be obtained using CP SPOOL CONSOLE START * to spool the console to the user's own virtual reader. The command CP SPOOL CONSOLE STOP CLOSE should be used to close the file. If the AUS command is issued periodically throughout a session, the console log can be examined later to find out which were the most expensive commands.

4. WORKSTATIONS AND TELECOMMUNICATIONS

Two major programmes of work are scheduled for the next six months: the conversion from HASP to VNET, and a programme of workstation upgrades.

VNET

The initial 'home' installations of VNET are just about complete. As expected, a small number of implementation problems have shown up, but they have been identified and fixed. RAL workstations will now be converted to VNET at a rate of approximately one per week, with a site-specific User's Manual being produced for each change. Any comments on VNET (whether problems or usage) should be sent to the Program Advisory Office (CMS ID=US, ELECTRIC ID=US).

Workstation Upgrades

In the ICF area a small number of sites are being given larger machines and the redundant machines will be re-configured to provide a larger and improved workstation facility at certain GEC2050 sites. The displaced GEC2050 hardware will be used to enlarge non-networked GEC2050s which currently cannot connect to the SERC network. At other sites existing ICF facilities will be modified where necessary to accommodate an existing population of GEC2050 users and the GEC2050 removed. The following table indicates those sites for which changes have been agreed:

Site RAL connection Current equipment Upgrade
Bangor SERCNET GEC2050 DEC10-Gateway
Southampton SERCNET GEC2050 GEC4070
Reading SERCNET GEC2050 GEC4070
Durham SERCNET GEC2050 GEC4070
Bristol SERCNET GEC2050 ICF-GEC4090
Sussex SERCNET GEC2050 ICF-PRIME
Surrey HASP-M/L GEC2050 ICF-PRIME
Westfield HASP-M/L GEC2050 SERCNET-GEC2050
Edinburgh Univ-Phys HASP-M/L PDP11/40 SERCNET-GEC2050
Exeter HASP-M/L GEC2050 PDP11/40

SERC Network

It has been decided to extend the SERC Network with Packet Switch Exchanges to be installed in Edinburgh, London, Cambridge and CERN. The main purpose of these PSEs is to rationalise complex operational circuits where a number of otherwise independent connections share the same physical line. Ideally each PSE will be independently connected to at least two other PSEs in the Network. A further potential effect is to produce considerable long-term savings in Private Wire rental costs. This will in some way affect most users, in that their connection point to the Network will be changed. Service implementations of terminal protocols will need to incorporate the new Network addresses. Schedules for the changes will be circulated as the details become clear. Network access to CMS and VNET is now handled by a virtual machine called VMNCP rather than DKNCP running under FEM and handling HASP and ELECTRIC traffic.

PACX

New PACX software is in use which enables a message stored in PACX to be displayed on making a connection. In addition, a number of ports have been changed from ELECTRIC access to CMS and upgraded to 4800 bps. This process will continue as the use of ELECTRIC goes down.

It is intended to standardise on 4800 bps line speeds in CMS and ELECTRIC. However there are technical modifications that need to be made to the communication lines. A few slots will be retained at other speeds for those terminals which require them.

5. MAXIMUM MVT REGION SIZE

When the IBM 3081D begins service it will provide part of the MVT batch service. The restriction of 1.4 Mbytes maximum REGION size could in principle be eased. Likewise there will be less need to penalise jobs with large core requirements. This will be of particular benefit to programs with large Overlay Structures. Users with programs of this nature are requested to contact J C Gordon in User Interface Group (CMS ID is JCG, ELECTRIC ID is PY). See also the item on Trial MVS.

6. TRIAL MVS SYSTEM

As part of the move to the MVS operating system on the IBM 3081D (see January issue of FORUM, item 2), we will be introducing a trial MVS system, hopefully in the latter part of this year. In order to identify users who would benefit most from time on this system, we wish to contact groups whose work consists of large core jobs but with limited input/output. In particular we would like to hear from anyone who finds the size (1.4M) of the 195 a restriction. Potential users should contact J C Gordon at RAL, exts 6574 and 6111. CMS ID=JCG or ELECTRIC ID=PY.

7. VM SPOOL

Too many files are being left on the Spool for long periods. Users are urged to remove them quickly. Unclaimed files are erased if not removed in reasonable time.

8. EXTRACT FROM MINUTES OF CCSUM - 6/1/82

The following items of interest have been taken from the Central Computing Site Users Meeting held on Wednesday 6 January 1982.

  1. The second string of Memorex disks has now been accepted. The Short Lived Data facility has been expanded from 400 Megabytes to 917 Megabytes and is entirely on 3350 equivalent disks. The next tasks are to convert RHEL02,3,4 and 5 and so expand the Long Lived Facility from 1100 to 1600 Megabytes. After that data will be removed from Setup disks which will then be withdrawn.
  2. The BROWSE facility for ASCII terminals has been installed and is working with no apparent problems.
  3. A CMS version of CPULFT is to be installed in mid-January.
  4. The VS Fortran Library became available to users during January.
  5. Logging in to alternative CMS accounts has now been implemented.
  6. The TDMS writeup will soon be available.
  7. From 1 April 1982 it is intended to issue Priority 1 rations in proportion to the authorised allocation.

10. INDEX

List of articles in FORUM 16

16.1    Introduction
16.2    Changes at RAL
16.3    Central Computer Procurement
16.4    IBM VS FORTRAN Program
16.5    Extract from minutes of PRIME User Meeting
16.6    Questions raised at CCR meeting, 13/7/81
16.7    Courses
16.8    Diary
16.9    Any offers - information needed about tapes
16.10   Computer Statistics
16.11   Index

12. DIARY

USER MEETINGS

The following dates have been decided on for DECsystem-10 Users Committee meetings during 1982:

The time and place for these four meetings is 10.30 am at the James Clerk Maxwell Building, King's Buildings, Edinburgh.

Supported Packages on the IBM Central System

Supplement to FORUM No. 20 February 1982

1. SUPPORT CATEGORIES

There are four levels of support, as follows:

  1. MAXIMUM SUPPORT
    • Complete documentation
    • Support always available in office hours
    • Immediate maintenance
  2. HIGH SUPPORT
    • Good documentation
    • Support usually available in office hours
    • High priority maintenance
  3. STANDARD SUPPORT
    • Basic documentation
    • Limited support - list of local experts available
    • Maintenance referred to issuing body
  4. MINIMUM SUPPORT
    • Usually some documentation
    • Support not usually available
    • No maintenance

2. HOW TO FIND OUT MORE ABOUT PACKAGES

The packages listed below are those afforded some level of support at the end of December 1981. Initial queries may be directed to the Program Adviser, who will either deal with the query or, where appropriate, put the user in touch with the appropriate expert. No commitment is implied to continue support of any particular package, but certain packages are offered with a specified minimum notice of withdrawal. If users have any suggestions for additions, deletions, corrections or changes to this list, they should contact D. H. Trew, User Interface Group, Computing Division.

The following abbreviations are used in the list below:

BCRG    Bubble Chamber Research Group
CD      Computing Division
DL      Daresbury Laboratory
HEP     High Energy Physics Division
SA      Space & Astrophysics Division
SNS     Spallation Neutron Source Facility
ICF     Interactive Computing Facility
Tech    Technology Division

3. LIST OF SUPPORTED PACKAGES

PACKAGE LEVEL SUPPORT DESCRIPTION
ALCHEMY Min DL Quantum Chemistry
ALGOL Min CD Compiler
APPLE Min BCRG Direct Channel Partial Wave Analysis
ASAS High SNS Finite Element
ASCOLI Min BCRG Three Body Partial Wave Analysis
ASSEMBLER (F) Max CD 360 Assembler
ASSEMBLER (VS) High CD 360 Assembler
ASTAP Min Tech Statistical Analysis Program
ATMOL Min DL Quantum Chemistry
BABBAGE XREF Std SNS Formatted listing of BABBAGE programs
BCPL Std CD Compiler
BDMS Min Durham Berkeley Data Base Management System
BERSAFE PH2 High Tech Finite Element
BERSAFE PH3 High Tech Finite Element
BERDYNE High Tech Finite Element
BMDX72 Min CD Bio-Medical Statistics Package
BUEDDY Std Tech 2-D Eddy Current Program
CAMAL Min CD Symbolic Algebra
CAPSTAN Min AERE Critical Path Analysis
CASORT Std CD Sort/Merge
CCOPY Std CD Lists/Copies Sequential Files
CLIST Std CD Lists or Copies Source Files
CERN Library Std CD Subroutine Library
CFD/CFDX Min CD Translator generates ASK for ILLIAC IV
COBOL(E,F,ANS) Std IBM Compilers
COCOA Min CD Text Processing
COPYTP Std SNS Copies GEC 4080 tapes
CPC Library Std CD/CPC Subroutine Library
CSL Min ICF Simulation
CSMP 1 Min CD Continuous System Modelling
CSMP 3 Min CD Continuous System Modelling
DIRMAINT Std CD VM Directory Maintenance
DMS Std CD Display Management System
DSKSOL Std Tech Solves large sets of linear equations
ELARKIVE High CD Electric File Archive Facility
ELMUG/ELFR80 High CD Output ELECTRIC Graphics Files
ELOUTPUT High CD ELECTRIC Files to Printer/Punch/Tape
ELDIRE High CD Batch Access to ELECTRIC Directories
ELSEND High CD Batch Access to ELECTRIC Files
ELUSER High CD Batch Access to ELECTRIC
ELECTRIC Max CD Text Editing
FAMULUS Std CD Text Sorting and Indexing
FAPLOT (ENPLOT) Std CD/HEP Histogramming Package
FELIB High Tech Finite Element Library
FEMGEN High Tech Finite Element
FEMVIEW High Tech Finite Element Routines
FORTRAN G1 Max CD Compiler
FORTRAN H Extended Plus Max CD Compiler
FORTRAN VS Std CD Fortran 77 Compiler CMS only
FOWL Std CD Monte-Carlo Phase Space Program
GENSTAT Min CD Statistics package
GEOMETRY, KINEMATICS & ORACLE Min BCRG Film Analysis Programs
GEROFF Min CD Text Processing System
G-EXEC Max NERC Relational Data Base and General Data Handling System
GFUN Std Tech Magnet Design
GINO-F Std CD Graphics
GPSS Min CD Simulation
GRAPHICS/SUMX Min BCRG SUMX with Graphics Options (MUGWUMP)
Harwell Library Std CD Subroutine Library
HBOOK Std HEP Histogramming Package
HPLOT Min HEP Graphics (part of HBOOK)
HYDRA Std HEP Program Management with Dynamic Memory
IBM Utilities Std CD (some)
ICCG Std Tech Solution of Sparse linear equations
INFOL Std HEP Database Report Generator
JSPLOT Min BCRG Histograms/Scatter Plots via MUGWUMP
KWIC360 Min CD Text Processing Index System
MAST High CD Message Switching
MINUIT Std Oxford Minimizing Package from CERN
MORTRAN Std HEP Structured FORTRAN pre-processor
MTUT Std SNS Initialises and checks GEC 4080 tapes
MUGWUMP High CD Graphics Package and Filestore
NAG Library Std CD NAG Subroutine Library (Mark 8)
NAP Min Tech Circuit Design
NEWPAC Min Tech Finite Element
OLYMPUS Min CD Standardises Program design/structure
OSDITTO Std CD Tape and Disk Utilities
OSFLOW Std CD Flowcharting Program
OPAL Min HEP OPAL-GEANT Monte Carlo Software
PASCAL Min CD Compiler
PATCHY Std HEP Compiler Source code maintenance in multi-version programming
PE2D High Tech To Solve 2 Dimensional Equations of the Laplacian, Poissonian, Helmholtz or Diffusion Type
PFORT Min Tech Fortran Verifier
PITFAL High CD Location of Program Abend Addresses
PLANT/SUPPLY High CD Dynamic Substitution Facility for CMS
PLUTO78 Std CD Molecular Drawing Program
PL/1 (F) Std CD Compiler
PL/1 (optimizing) Std CD Compiler
PL360 Min CD Compiler
PPE Std CD Problem Program Evaluation
PREIFP High Tech Interfaces FEMGEN and PACK
PREVIEW High Tech Interfaces FEMVIEW and PACK
PRINT Std SNS Prints files on a GEC 4080 tape
PURL Std CD Pre-Publication Text Preparation
REDUCE Min CD Algebraic Manipulation
RL Library Std CD Subroutine Library
RL Utilities Std CD RL Program Library Utilities
SCRIPT Std CD Text Processing System
SIMULA Min ICF Simulation
SHELX Min DL Crystallography
SMART Std CD Real Time Monitor for VM
SMOG High CD Graphics
SPICE Min ICF Circuit Design
SRAM Std CD CASORT Sort/Merge callable from high level languages
SSP Library Min CD Scientific Subroutine Package
STACKER Min CD 7 to 9 track Tape Conversion
STAGE2 Min CD Macro Processor
STAIRS Min CD Information Retrieval
SUMX Std CD Statistics Package
TDMS Std CD Disk and Tape Management
TPELEC Std SNS Copies 4080 tape files to ELECTRIC
TRANFILE Std CD File Transfer between CMS/OS & CERN,DESY,DL
TRANSFER Std SNS Copies ELECTRIC files to 4080 tapes
TRIAB Min HEP Book-keeping system for the analysis of experiments/tape management
VICAR Std SA Image Processing
XEDIT Max CD Display Editing System for CMS
XRAY74 Min DL Crystallography
ZBOOK Std HEP A Dynamic Memory Management System