
No 37 July 1983


1. EDITORIAL

This issue of FORUM contains the report of the Central Computer Representatives' Meeting. Although that report is dominated by IBM-specific material, it includes one item of very general interest on Network Mail. A serious attempt has been made to restore the balance by keeping other IBM-specific items to a minimum this month.

The problems associated with the move to Rev 19.1 of the Prime operating system have been solved. Additional memory is now on order and will be delivered by 1st August. A draft schedule for mounting Rev 19.1 on each machine is being discussed with the individual sites. The final schedule will be published as soon as it is agreed.

The closure of the Dec 10 at Edinburgh and the impact on the main SERC APL service are brought to users' attention in this issue.

The service overall, despite the abnormally hot weather, has been good apart from some recent problems with VM on the 3081D which resulted in a poor CMS service. Measures to overcome these difficulties have been implemented, but it is too early to say how effective they are.

The central IBM system is currently suffering from I/O contention problems. Users should be aware that we are very conscious of this, and that every effort is being made to come up with a sensible solution. Unfortunately, this cannot be achieved overnight and much careful study is required.

IBM batch usage has increased significantly in the last two weeks, largely due to a number of large High Energy Physics groups starting their production work. The Atlas 10 has continued to perform well and is now producing some sixty percent of the batch cpu hours delivered to users. Discussions are in progress with ICL which may bring forward the handover date by three weeks to 8th August.

The Network Executive vacancies have been widely advertised in the press, and it is hoped that this will lead to a successful recruitment exercise. Two Joint Network Team posts were included in the advertisement.

Regrettably, we have been forced to reduce the Program Advisory Office service from 8th August. Full details are given in article 3 below.

Mike Jane - Head of User Support Group

2. CENTRAL COMPUTING COMMITTEE MEETING 22/6/83

The major event was the approval for a paper to go to Council recommending the purchase of a 110 Gigabyte M860 mass storage system from MASSTOR International. Assuming it is approved by Council, the system would be delivered in the New Year and we would hope to start a service using it at the same time as the full production MVS service.

A large part of the meeting was spent considering ways in which recurrent costs could be saved in the current financial year and later years. The Central Computing budget has been seriously impacted by the cuts to the various Boards. All Boards other than Engineering have made drastic cuts to their contribution to Central Computing. As a result, a much slimmed down service is envisaged. In the current year, we have moved to DPCE for the maintenance of the Memorex discs and have decided to adopt a new approach to maintaining both terminals and machines. In the case of terminals, we will stop the yearly maintenance and instead keep a set of terminals for replacement. When a terminal breaks, it will be returned to RAL and a replacement sent to the user. Fast express parcel carriage services should, we hope, make this possible with no worse a service to the user. The broken terminals will then be repaired as a set. In the case of the computers, we intend taking a number of the GEC computers off maintenance and paying for repairs as and when they break.

In order to give the operators sufficient time for training in MVS, and to reduce recurrent costs in the long term, we shall be moving from 5 to 3 operators on the basic 5-shift system. This will have an impact on the number of tapes that can be loaded per week. Although there is a long term plan to move active data off magnetic tape, this cannot begin seriously until the mass storage system and MVS are fully in operation. In the meantime, the amount of resource delivered by the mainframe systems will be reduced. The exact effect on the user population will not be known until the new system has been in operation for a few months. We apologise in advance to users, but this has been forced upon us by a need to reduce staff costs in the current and future years.

A joint SERC/Computer Board Working Party has been set up to review the way computing is funded and managed in the Research Council and University sector, and to report back by Christmas.

The need to reduce manpower led the Computing Division to explore the possibility of freezing support for the PRIME systems and stopping networking support for VAX/VMS systems. There was strong support at the meeting for continuing both services. As a result, it was decided that 3 MY (man-years) of support currently used for the central batch service would be moved to supporting the interactive systems. This has already been done and will allow a small amount of network support for VAX systems to continue, and enable us to move to Rev 19 of the PRIME operating system.

Prof F R A Hopgood - Head of Computing Division

3. REDUCED PAO SERVICE

The Program Advisory Office service has to be modified in the light of reducing manpower and the consequent unreasonable load placed on the people with the necessary expertise to provide the service.

From Monday 8th August 1983, the PAO will open from 1400 to 1630 Monday to Thursday and 1400 to 1530 on Fridays. During these periods users can visit the office if they wish.

Queries must be sent via GRIPE, ASKUS, TELLUS or NOTEs to US except in cases of extreme urgency when the telephone may be used. Routine and non-urgent enquiries will not be accepted by telephone. Emergency service outside these hours can be obtained by contacting the Shift Leader.

The service may be further reviewed at a later date in the light of experience gained from this change.

Mike Jane - Head of User Support Group

4. DIARY

The following is a list of future courses/meetings.

For further information and enrolment, please contact either the Program Advisory Office on Abingdon (0235) 44 6111 or Garry Williams on Abingdon (0235) 21900 ext 6104.

5. RECENTLY ANNOUNCED HARDWARE

GEC Series 63 Computer

GEC have recently announced a new computer, designated the Series 63. The architecture of this machine differs from the established 4000 series in that it is based on 16 general purpose registers (32 bits wide) and a 32 bit bus. The addressing limitations of the old 4000s have been overcome by the use of a paged virtual address space with full 32 bit addressing. This gives a virtual address space of 4096 Mbytes for each process, of which there may be 4096 in a system.
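As a worked illustration of paged addressing (a sketch only: the Series 63 page size is not given above, so a 4 Kbyte page is assumed purely for the example), a full 32 bit address splits into a page number and a byte offset, and 2^32 bytes is exactly the 4096 Mbytes quoted:

  /* Illustrative sketch only: split a 32 bit virtual address into a
     page number and a byte offset.  The 4 Kbyte page size is an
     assumption for the example, not a Series 63 specification.      */
  #include <stdio.h>

  #define PAGE_SIZE 4096UL   /* assumed page size */

  int main(void)
  {
      unsigned long va = 0x12345678UL;        /* any 32 bit address   */
      unsigned long page   = va / PAGE_SIZE;  /* which page           */
      unsigned long offset = va % PAGE_SIZE;  /* byte within the page */

      /* 2^32 bytes = 4096 Mbytes: the per-process space quoted above */
      printf("page %lu, offset %lu\n", page, offset);
      return 0;
  }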

Initially the system may have a maximum of 12 Mbytes of real store, but this will be increased to 48 Mbytes in future. There is an architectural limit of 256 Mbytes on the store size. The store cycle time is 250 ns, and a pipelining mechanism enables the processor to achieve an instruction rate of 3 MIPS. I/O is performed by 'secondary processors', which operate independently of the main processor and use direct memory access across the bus.

Other important features of the system include comprehensive security mechanisms based on privilege levels, and extensive self test hardware built into each component of the system. Floating point arithmetic is available for numbers up to 128 bits, and there is an optional floating point accelerator for improved floating point performance.

A choice of two operating systems will be offered with the Series 63, UX63 and OS6000. The former is a virtual memory UNIX system derived from UNIX System III from AT & T, whilst the latter is a GEC operating system which provides the user with an environment similar to OS4000.

The standard set of languages is supported, including FORTRAN 77, C, PASCAL, etc. Extensive communications facilities are provided via a front-end processor using standard protocols such as X25 and Ethernet.

Andrew Dunn - Systems Development Group

PRIME 2250

The PRIME 2250 is a small multi-user minicomputer compatible with the PRIME 50-series machine range. This range includes the 550 and 750 machines widely used by SERC. The 2250 is designed for an office environment and is small (1/3 cubic metre) and acceptably quiet. It requires a single 13 amp 250 volt supply. The machine can be configured with up to 4 Mb of memory and two Winchester disk drives of 68 Mb or 158 Mb each in the basic chassis. It also includes as standard a 1/4 inch cartridge tape drive providing up to 15 Mb per tape of off-line storage for backup, etc, plus a synchronous line for networking and 8 asynchronous terminal lines.

One attraction of the machine for existing PRIME users is its total software compatibility with the other PRIME machines currently supported by SERC. It runs the same operating system, compilers, utilities and networking software, and all applications programs will run on it unchanged simply by transferring the SEG files. This feature is of particular interest in those application areas which involve a considerable amount of interactive design work using software on the PRIMEs at RAL accessed over the network or on leased lines. A local 2250 could provide a better interactive environment for this software, particularly if graphics is involved, and could be fully networked into SERCNET to provide access to other machines, given that manpower was available to support systems maintenance and networking on another system.

Computing Applications Group in Technology Division at RAL has recently evaluated a 2250 on loan from PRIME as a design workstation for the VLSI community. This application requires the use of high resolution colour raster graphics terminals such as the SIGMA 5688 for interactive design. The 2250 allows these terminals to be connected at 19.2 Kbaud. Benchmarks using the GAELIC interactive IC layout editor indicated that a 1 Mb 2250 would support 2 to 3 simultaneous users on such graphics terminals with acceptable response, the main limitation being paging. With 2 Mb of memory, 3 to 4 graphics terminals could be supported, the main limitation then being time-sharing of the CPU. The machine would of course support more than 4 users doing less demanding work. The CPU power of the PRIME 2250 was measured as 0.45 of a PRIME 750 for mainly integer computation and 0.34 of a 750 for floating point work.
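To put those ratios another way (an illustrative calculation, not an additional measurement): a job using 1 CPU hour on a 750 would need roughly 1/0.45, or about 2.2 CPU hours, on a 2250 for integer work, and roughly 1/0.34, or about 2.9 CPU hours, for floating point work.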

D R S Boyd - Computing Applications Group, Technology Division

6. EQUIPMENT MAINTENANCE

General

The current financial situation has caused Computing Division to re-examine operational methods in some areas with a view to making savings. Two areas of current activity have been studied and alternative cost-cutting strategies have been devised. These activities are the maintenance arrangements for terminals and for some Multi-User Mini computers.

Hitherto the practice has been to set up fully comprehensive maintenance agreements which, for a fixed sum per annum per unit, provide routine maintenance, repair service and replacement of faulty parts. This is a very expensive practice, as the supplier has to guarantee the availability of spare parts and trained staff, and to cover himself against too many faults per unit.

Examination of records shows that the majority of terminals have very good fault records. This is equally true of Multi-User Mini computers. This allows us to change the policy on maintenance of these types of equipment as described below.

Terminals

A scheme is to be set up based upon the use of a pool of 'spare' terminals, express carriers and bulk maintenance arrangements. Terminals will be repaired at a manufacturer's or maintenance agency's base workshops, or possibly at RAL by a visiting engineer, and the repaired terminal returned to the pool. In general, every effort will be made to replace a terminal with an exact duplicate, although sometimes it might be necessary to use a functional equivalent. A table of near functional equivalents follows. Some terminals which are very large or weighty or particularly sensitive to disturbance cannot sensibly be handled with this kind of arrangement. Terminals falling within this category will be subject to contracts based upon time and materials, including a guaranteed fault-call response time. Where it is necessary, some Routine Preventive Maintenance will also be contracted. A working target will be for user service to be restored within 48 hours, and it is possible that this may be easily beaten in a great many cases. This compares favourably with the measured performance of 4 to 5 days on the Cable and Wireless contract. The annual saving from this exercise is expected to be in the region of £100,000.

Terminal Equivalents

GEC 4000 Minicomputers

To assess the impact on service there is to be an initial investigation with ten GEC 4000 systems. The present maintenance contract with GEC costs £400,000 per annum plus VAT.

The alternative arrangements for these machines will involve contracts to cover Routine Preventive Maintenance and guaranteed fault-call response, with costs based on time and materials. This scheme is estimated to save in the region of £100,000 per annum without impacting the quality of maintenance support. The machines involved will be a selection of Multi-User Mini workstations. The initial list of ten is as follows:

Cyril Balderston - Computer Services Group

7. REPORT OF THE CENTRAL COMPUTER REPRESENTATIVES' MEETING - 29 JUNE 1983

Division Head's Talk

This covered Divisional reorganisation, described in FORUM No 35, and the CCC meeting described elsewhere in this issue.

F R A Hopgood

Developments in the Mainframe Complex

The Atlas 10 is to be handed over in September and will be running production work full time from then. The details of the configuration of the various MVT systems have yet to be decided. An internal MVS service will also begin then, either on the 3081 or the Atlas 10.

It has been agreed that from September the 3032 will be taken out of the scientific service and dedicated to SERC administrative computing. Some peripheral equipment (disc, tape, communications) will also be taken, and there will be some additional terminal support overhead put on the 3081, but it is expected that the scientific service will be recompensed by the purchase of new equipment - a solid state paging device is one possibility.

If the MASSTOR M860 is purchased, as recommended by CCC, then apart from giving large amounts of effectively on-line storage to MVS users, it is hoped to use it to offer an archiving service to SERC network users.

One final development is worth mentioning - an inexpensive upgrade to the RAL standard Cifer terminal will be available very shortly to enable it to be used, optionally, as an IBM screen when accessing CMS over the network or through PACX.

C J Pavelin - Head of Systems Development Group

Network Mail

Extra requirements are imposed on a computer mail system by the ability to send and receive mail from other computers on a network. A means for mail services on different computers to communicate, and compatibility between them, must exist. The interim JNT mail protocol provides communication using NIFTP, and compatibility through the definition of a standard message header containing a number of fields (e.g. TO:, FROM:, SUBJECT:) that the different mail systems on different computers both generate and interpret. Network mail also requires more sophisticated mail systems, offering facilities such as distribution list handling, real name matching and hardcopy memo support. Once sending computer mail to people not on your local computer becomes feasible, there exists the problem of how to address them.
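By way of illustration only (the exact field names, ordering and address syntax here are assumptions, not a definition of the protocol), a message under such a header scheme might begin:

  FROM:    P.J.Newton@RL
  TO:      A.N.Other@RLGB
  SUBJECT: Network mail example

  text of the message follows the header...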

This problem has been overcome within Computing Division by setting up a directory containing the address of every member of the Division. An M printer is used to generate hardcopy memos for those who do not use a computer regularly. These memos are entered directly into the manual post system. Names are stored in the directory as initials and surname only, e.g. P.J.Newton, and the computer on which the directory is kept is known to the other Computing Division computers as RL or RAL. In fact the directory is currently located on RLGB.

JNT mail is supported on GEC, Prime, VAX/VMS, CMS and (shortly) PDP11/UNIX. Users of these machines can therefore send computer mail to a member of Computing Division by sending it to site RL or RAL, addressed to the recipient's real name in the format A.N.Other.

Phil Newton - Computer Services Group

Statistical Analysis System

SAS (A Statistical Analysis System) has been mounted on the 3081 under CMS. This system will be used to provide accounting and performance analysis for the MVS system. Users are being allowed access to the SAS system as an unsupported feature of the central computing system. This talk gave a flavour of the ways SAS can be used to perform some simple analysis of data and a demonstration of the potential results available.

Stella Robinson - Computer Services Group

CMS Mail

The RMAIL command has been replaced by the NOTE and RECEIVE commands from release 2 of CMS. This talk outlined how these commands are used, and described plans to modify the manner in which they are invoked to make them more 'user friendly'.
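For illustration, a typical exchange might look like the following (a sketch only: the exact operands are assumptions based on the addressing format described under Network Mail above):

  NOTE A.N.OTHER AT RL     compose a note to a network recipient
  RECEIVE                  file the next item waiting in the virtual reader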

Dave Asbury - Computer Services Group

8. APL SERVICE

The principal SERC APL service runs on the Edinburgh Dec10, which is scheduled to close down in October 1984. The users of this service have indicated that they will not need an alternative service to be provided when the Dec10 is removed.

This note is designed to warn existing and prospective APL users that unless they contact me by the end of September, stating their requirements, no alternative service will be provided.

Mike Jane - Head of User Support Group

9. GEC EXCHANGE SOFTWARE

In the mid 1970s it was decided to develop an X25 communications network for SERC. Andrew Dunn produced the exchange (or switch) code which ran on a GEC 4000 computer. This first operated in 1977 and the first hosts to be connected were GEC 2050 workstations, the IBM 360/195 and the GEC multi-user minis. Since then the network has grown to 11 switches and around 160 connected computers or DTEs (Data Terminal Equipments). The network was never expected to grow beyond a single switch and a handful of DTEs, let alone to its current and still expanding size. The switch code was as simple as possible so that it would be reliable and easy to produce. Over the years it has been enhanced in many ways but it is now apparent that it needs a radical rewrite, in particular to improve management facilities which are very primitive. Currently these are provided by one of the DTEs continually making calls to all the others. From the results of these 'polls' the state of the DTEs can be found and the state of the network inferred.

Rather than rewrite the code, it was decided to adopt the switch code being developed by GEC. This has the required management features and was designed for a large multi-switch network. GEC will support the software and will enhance it to follow any developments in standards which arise.

In mid 1982 it was decided to use the GEC code, and Jonathan Mills started working towards its adoption. It was late 1982 when the first suitable version of the GEC code was received. Before it could be used it had to be enhanced so that it could take the place of the SERC switch code without any other changes being made to the network. This involved the following enhancements (the first two are sketched after the list):-

  1. Ability to load share over 2 or more lines between switches.
  2. Ability to route a new call to avoid a broken line or switch.
  3. Ability to use the variable length SERC DTE numbers or addresses.
  4. Provision of loop back and time stamp facilities in switches.
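
The first two enhancements can be pictured with a short sketch (hypothetical names throughout; this is not the SERC or GEC switch code): for each new call the switch picks the least loaded of the lines in service towards the next switch, which both spreads traffic over parallel lines and routes around a broken one.

  /* Hypothetical sketch of enhancements 1 and 2: choose the least
     loaded working line towards the next switch for each new call. */
  #include <stdio.h>

  #define NLINES 4

  struct line {
      int up;      /* 0 = broken, 1 = in service      */
      int calls;   /* calls currently using this line */
  };

  /* Return the index of the line to use, or -1 if none is up. */
  static int choose_line(const struct line *lines, int n)
  {
      int best = -1;
      for (int i = 0; i < n; i++) {
          if (!lines[i].up)
              continue;                 /* route around a broken line */
          if (best < 0 || lines[i].calls < lines[best].calls)
              best = i;                 /* load share over parallel lines */
      }
      return best;
  }

  int main(void)
  {
      struct line lines[NLINES] = { {1, 7}, {0, 0}, {1, 3}, {1, 5} };
      int i = choose_line(lines, NLINES);
      if (i >= 0)
          printf("route the new call over line %d\n", i);
      else
          printf("no line available to the next switch\n");
      return 0;
  }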

There were a number of more minor changes. These changes were completed by the end of 1982, to schedule. Since then testing has taken place, which has taken a lot longer than expected. It is vital that, when the GEC software is put into service, it performs at least no worse than the current SERC code. The problems have been in a number of areas:-

  1. Testing has only been possible on Thursday evenings, at well publicised times, to avoid impacting users too much. (Lately Sundays have been used for more extensive testing.) Progress has therefore been slow, and it has not been helped by a number of irritating problems, such as a broken floppy disk and an unexpected reconfiguration of an exchange, which have spoilt many Thursdays. These and other problems each caused a week's delay.
  2. During testing a very large number of faults have been found in the GEC software. It was obvious that the code had only really been tested on relatively small stand alone switches. SERC was testing the code on a large switch which has 40 lines. Many of the problems were in the generation of the system when tables and counts were found to be too small. In most cases corrections were easy to make and the changes needed have been reported to GEC.
  3. A number of problems were found with specific DTEs. The SERC code tended to be fairly tolerant of DTEs which do not follow the standards exactly, but the GEC code was not. This led to some discussion as to whether the DTEs should be made to conform or the switch should tolerate non-standard DTEs. It was decided to make the switch tolerant, as it was thought that the chances of persuading manufacturers to change their DTE code quickly were poor.

At the beginning of June the large switch at SERC used GEC code for most of a day and proved that the code is now satisfactory for use on the network, or at least that there is enough confidence to try an extended test on the live network. Unfortunately this is not quite possible. The GEC code takes much more memory as it contains management code and is more sophisticated. This lack of store shows itself in poor performance. There is now a half megabyte GEC 4160 on order to replace the 256K switch and the 256K of store released will be distributed to the other switches so that they are large enough to support the GEC code.

Although it had been hoped to move to the GEC code in March, it is far more important to ensure that the changeover is not going to be disruptive than that it is done quickly. Nonetheless the changeover is urgently needed to allow the far more advanced GEC management code to be used. The manpower to work up the GEC code has been quite modest at about 5 man months, although the elapsed time has been longer.

It is a pleasure to acknowledge GEC's help and encouragement in this development.

Paul Bryant - Systems Development Group

10. COMMON BASE PROGRESS

Much has happened since the last FORUM article. PERQ UNIX (PNX) became generally available in March of this year; this release of the operating system offered a Window Manager and the C compiler. By the time this article appears, a new release of PNX, with a Fortran 77 compiler and an improved Window Manager, should have been distributed. (The system has passed the ICL quality assurance tests; software and documentation are being duplicated at the time of writing (early July).) A pre-release of the Pascal compiler has been made available (for use at RAL only, unfortunately). Limited testing has disclosed no problems with this compiler as yet.

GKS development on PERQ has proceeded rapidly since the delivery of the Fortran 77 compiler; at present developments are at (approximately) level 0a (no input, simple output), with completion of development to level 1b and pre-release targeted for the end of August.

On the communications front, the asynchronous FTP system, which allows robust file transfer over asynchronous lines to ICF Primes and GECs, has completed its first development phase, and user documentation is being prepared. The X25 developments at York for PERQ are now nearing completion, and the end of July should see a prototype working. Autumn should see the completion of the Basic Block Protocol on the Cambridge Ring for PERQ. (It is not now clear that the higher level Transport Service Byte Stream Protocol will be implemented - any user likely to be inconvenienced by this should contact the author of this article.)

Any problems with PNX or any other Common Base component should be communicated to the Common Base Support Office on extension 6488 at RAL, or by direct dialling to 0235 44 6488. The office is staffed from 10-12 and 2-4 weekdays; the hours may be changed as demand increases. A message recording system is in use outside these times.

Ken Robinson - Common Base Project

11. CLOSURE OF THE SERC DEC 10 SERVICE AT EDINBURGH

The plans to close the Dec10 are well in hand and we have now passed the "point of no return" (ie the plan cannot be changed to prolong the service beyond October 1984). The provision of facilities for the Dec10 users is in hand. The Artificial Intelligence community will benefit from the AI/IKBS initiatives of SERC and the Government supported Alvey programme. The remaining users will be provided with resources on existing facilities, and discussions are in hand with these users to identify their specific requirements.

Any Dec10 users who are not satisfied with the way their future needs are being dealt with, should contact the User Support Office at ERCC, or speak directly to me at RAL.

Mike Jane - Head of User Support Group

12. GKS PROGRESS REPORT

Readers of previous editions of FORUM will know that Graphics Section embarked on an implementation of GKS - the future ISO standard graphics system - in mid 1982. Since then GKS has itself become much better known and is rapidly being accepted in the USA as a standard. Work is now proceeding on standards for graphics metafiles, virtual graphics terminals and 3D: all of these use GKS as a backbone.

Our GKS implementation project has achieved many milestones - notably the drawing of pictures! The design work that we started in June 1982 continued into October 1982. This long design phase was partly intentional and partly caused by some significant additions to GKS at its last technical review, at Eindhoven in June 1982. However, in the same month that we finished the design, GKS drew its first lines.

Since then we have completed work on all output functions - remember GKS has pixel arrays and pattern-filled areas as well as the more usual lines and text. Design work for input and segments has been our task more recently and implementation of these functions is now in progress. We currently expect that coding of the system will be complete in mid-October, with the documentation being finished in mid-November. The development has been done in parallel on PERQ (under both POS and PNX) and VAX (under VMS). We have working (but incomplete) systems on both PERQ/PNX and VAX/VMS. Some time has had to be spent making sure that installation on other systems will be as simple and error-free as possible.

So far we have worked on device handlers for the PERQ screen, Tektronix and Sigma terminals, and a metafile. Many more device handlers are needed and work on these will be started in late August. The handlers are being written in a way which maximises the amount of code that can be transported to other operating systems; in addition, the handlers are written so as to share a pool of utility routines, thus reducing the lead time required for completing a new device handler (and the storage requirements when running GKS).
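The shared-pool arrangement can be sketched as follows (all names are hypothetical; this is not the RAL source): each device handler fills in a table of operations, and device-independent work, such as breaking a polyline into single vectors for simple terminals, comes from the common pool rather than being duplicated in every handler.

  /* Hypothetical sketch: device handlers as tables of operations
     sharing a pool of device-independent utility routines.        */
  #include <stdio.h>

  struct point { float x, y; };

  struct handler {
      const char *name;
      void (*polyline)(int n, const struct point *p);
  };

  /* Shared utility: reduce a polyline to single vectors for devices
     that can only draw one line segment at a time.                  */
  static void draw_segments(int n, const struct point *p,
                            void (*vector)(struct point, struct point))
  {
      for (int i = 1; i < n; i++)
          vector(p[i - 1], p[i]);
  }

  /* One example device whose 'vector' primitive is simply printed. */
  static void term_vector(struct point a, struct point b)
  {
      printf("vector (%.1f,%.1f) to (%.1f,%.1f)\n", a.x, a.y, b.x, b.y);
  }

  static void term_polyline(int n, const struct point *p)
  {
      draw_segments(n, p, term_vector);  /* reuse the shared utility */
  }

  int main(void)
  {
      struct handler term = { "terminal", term_polyline };
      struct point tri[] = { {0, 0}, {1, 0}, {1, 1}, {0, 0} };
      term.polyline(4, tri);
      return 0;
  }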

Further information on the progress of the project can be obtained from RAL Graphics Section, RAL extension 6565.

Chris Osland - Applied Programming Group

13. CONSIDERED ANSWERS TO SOME OF THE QUESTIONS ASKED AT THE CENTRAL COMPUTER REPRESENTATIVES' MEETING - 29 JUNE 1983

Q How does the current network management fit into the divisional structure? (John Macallister, Oxford)
A Currently, the management of the network is the responsibility of Dr Manning, the Computing Coordinator. There is a set of management meetings which effectively run the day-to-day service; these were described in FORUM No 33. Responsibility for those parts of the network serviced by RAL rests with Computer Services Group. However, this year is an interim one between SERCNET and JANET, the Joint Academic Network. Prof Wells is responsible for running the JNT and JANET management. He is currently recruiting staff for this function and, in particular, an SPSO will be recruited to head this section. Gradually, responsibility for the network management will move to this section. It is an independent section outside the Computing Division's Group structure, but housed within the Computing Division. (F R A Hopgood)
Q What are the relative capacities of a 6250 tape and an MSS cartridge? (Ian Runford, Liverpool)
A The Masstor M860 cartridge can contain up to 175 Mbytes, roughly equivalent to a full 6250 tape. (C J Pavelin)
Q When the trial MVS system is running, will one MVT system be removed, ie will it affect throughput? (John Wheater, Oxford)
A We have not yet decided whether the trial MVS system should run in the Atlas 10 or 3081, nor what the configuration of MVT systems should be. We hope that any effect on MVT throughput will be marginal. (C J Pavelin)
Q Has CD decided to standardise on the IBM VS Fortran compiler? (Ian Runford, Liverpool)
A No decision has been made. Computing Division consider the latest release of this compiler to be a suitable alternative to extended-H and so a possible future standard, but we have not excluded the installation of the ICL Fortran compiler, the same as the Siemens compiler used at CERN, in the Batch System. (J C Gordon)
Q If you use the Network mail in CMS, it is all put into a file NIFTP NOTEBOOK. Can this be improved? (Margaret Curtis, CD RAL)
A There are two reasons for the present situation. Firstly the CMS 'SENDFILE' function (which NIFTP uses) ignores any attempt to give a specific filename if the filetype is NOTE. It always uses the source virtual machine userid as the filename. Secondly, NIFTP does not at present analyse the full header to extract the source information, and the latter could contain characters unacceptable in a filename. Thus there are several difficulties. It is accepted that this needs to be improved, however, and it has been noted accordingly. (P M Girard)
Q I understand the reason for purchasing SAS was to provide facilities for analysing system account records. Will this information be available generally? (James Hutton, HEP RAL)
A In MVS facilities will be provided for Category Reps to access an 'on-line' database containing accounting information derived from SMF records. However SMF records will not be directly available because of the volume of data involved in this process. (A P Lobley)
All other questions relating to SAS have been omitted because the package is not officially supported. All enquiries should be directed to Stella Robinson.
Q What is the difference between UNIT = FREE and UNIT = STORAGE? (Ian Runford, Liverpool)
A UNIT = STORAGE is provided in anticipation of the MVS style of disk space utilisation. All datasets should be catalogued and should begin with a registered 'Subgroup' as the high level qualifier. Such datasets will be placed by the system on a pool of disks and will be retained there as long as they are 'active'. FREEDISK was provided as a means of storing short-term datasets and is thus much less useful. (A P Lobley)
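As an illustration (the dataset name is invented, with 'SUBGRP' standing for a registered subgroup qualifier; the other operands are typical rather than prescribed), a new catalogued dataset on the STORAGE pool might be allocated with a DD statement of this shape:

  //OUTPUT   DD  DSN=SUBGRP.USER.DATA,UNIT=STORAGE,
  //             DISP=(NEW,CATLG,DELETE),SPACE=(TRK,(10,5)),
  //             DCB=(RECFM=FB,LRECL=80,BLKSIZE=3120)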
Q When will the MVT service end? (Ian Runford, Liverpool)
A No date has been specified. However, the full MVS service is being introduced in July 1984, and we hope to have identified any users who may have problems in transferring long before then. Thus there should be a rapid move from MVT, and we would hope to close the MVT service very soon after. (C J Pavelin)
Q For backup procedures, will the Atlas 10 run a full CMS service? (Norman Gee, HEP RAL)
A We expect to be able to run a full CMS service and a limited OS batch service on the Atlas 10 in the unlikely event of a 3081 failure. There are plans to test this soon. (T G Pett)
Q How do we avoid conflict between QUERY and QUIT in the NOTE command? (John Hart, HEP RAL)
A Q with no operand means QUIT and Q with an operand means QUERY. (T G Pett)
Q The PEEK command is very slow. University of Waterloo has a modification which makes it much faster. Will this be installed? (Steve Fisher, HEP RAL)
A It will only be installed if it is a small modification which can be maintained easily. Our policy is to minimise the changes to standard manufacturers' software. (T G Pett)
Q Is NOTE available globally across the Network? (Paul Lamb, MSSL)
A NOTE can be used to send mail to anywhere on the Network which supports JNT mail facilities. (T G Pett)
Q Will the PRINT command in Electric be available after September? (John Macallister, Oxford)
A Yes. (T G Pett)
Q Why do HSTAT and HCANC fail so often (with 'try again later' reply)? (NA7 Collaboration)
A This is because of the difficulty in making a connection to HASP which is a sub-system running under OS/MVT in a different virtual machine. (T G Pett)
Q Using CMSELEC seems an excessive way to look at graphic picture files. (Graham Thompson, QMC)
A The use of CMSELEC to view graphic files created by MVT jobs is a temporary solution for MVT only. MVS will have a mechanism for routing graphics files into the CMS filestore so that they can be viewed using CMS graphics facilities. (T G Pett)
Q I had no idea that the EXEC command was to be removed from Electric at the end of September until I read the notes. I have 2 PAD students who need to use Electric in full until the end of the year. (Paul Lamb, MSSL)
A The decision to remove the EXEC command was a very recent change of plan adopted by the Computer Services Committee. Special arrangements will be made on request for users who have a valid reason to continue job submission from Electric. (T G Pett)
Q Can CD provide an EXEC to transfer Electric archived files to the CMS archive? (Ian Runford, Liverpool) (Anne Grimm, NERC)
A No. You can use the command OBEY M.OBEY.DIRE in Electric to restore a complete directory of archived files. These files can be transferred to CMS using CMSELEC and then archived with the CMS ARCHIVE command. (See also the following reply.) (T G Pett)
Q Moving Electric files into CMS is a major task - is there anything to help? (Joe Garrett, Bristol)
A We are investigating the provision of a CMS EXEC file which will allow a complete directory of Electric files to be copied into CMS via CMSELEC. (T G Pett)
Q What is happening about documentation for the mail service? FORUM is not enough. (James Hutton, RAL HEP)
A The mail service is an internal Computing Division service and documentation has been distributed to the people involved. Any extension outside this present service has many implications which are currently being studied. The documentation will not be distributed outside the Computing Division until the study is complete. (M R Jane)
Q Is there a New CMS Users' course for people that have never used Electric? (Tim Broome, RAL SNS)
A The IBM New Users' Course covers CMS. The content of courses is regularly reviewed, and this comment will be considered at the next review. (M R Jane)
Q Can we please have an article in FORUM reviewing the current graphics facilities with pointers to documentation and plans for future developments with provisional timetable? In particular, the introduction of GKS and the foreseen metafile facilities. (David Candlin, Edinburgh)
A An article giving the present situation with GKS appears in this issue. Later this year we hope to produce an issue of FORUM with graphics as its major theme. (M R Jane)
Q We have trouble keeping abreast of the constant changes in the system. The documentation needs updating. (Joe Garrett, Bristol)
A No one can argue with your statement. We always do the best we can with the limited effort available. If you wish to influence the priority we have to attach to each specific need, please contact me. (M R Jane)
Q Can system reliability be given top priority? (Barry Whittaker, HEP)
A System reliability IS given top priority. The large majority of problems are associated with old, obsolete systems such as MVT and the CERN/DESY back-to-back links. These are being replaced by much more modern, reliable systems: MVT by MVS and the back-to-back links by VNET/NJE links. In the case of CERN, the NJE link has been operational for several months with very few problems. The DESY link is now working and should be made fully operational very soon. (T G Pett)
Q Can jobs submitted to linked computers be purged when there are known problems with the link? (Barry Whittaker, HEP)
A With the VNET/NJE link, if a fault is detected on the link, any job which is being submitted is requeued in VNET. (T G Pett)
Q Can a method be devised for creating a new data set which will over-ride an old data set if one exists? (Barry Whittaker, HEP)
A The recommended method of deleting a data set which may not exist without causing a JCL error, is to use PGM=IEHPROGM with the appropriate SCRATCH control card. This will cause a non-zero condition code if the data set does not exist, but will not cause subsequent steps to fail unless the COND parameter is used for this purpose. (T G Pett)
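A sketch of such a step follows (the dataset name, volume serial and device class are placeholders, and the exact operands should be checked against the IEHPROGM documentation):

  //DELETE   EXEC PGM=IEHPROGM
  //SYSPRINT DD  SYSOUT=A
  //DD1      DD  UNIT=STORAGE,VOL=SER=WORK01,DISP=OLD
  //SYSIN    DD  *
    SCRATCH DSNAME=SUBGRP.USER.DATA,VOL=STORAGE=WORK01,PURGE
  /*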