No 31 January 1983


1. CENTRAL COMPUTING REPRESENTATIVES MEETING NOTES 22 NOVEMBER 1982

Owing to a shortage of space in FORUM 30, not all the notes from the Representatives' meeting could be published. The notes which were omitted are now presented here.

WORKSTATIONS AND TELECOMMUNICATIONS

VNET Conversion Programme

For a variety of reasons this has proceeded more slowly than hoped.

  1. The conversions of Royal Holloway College and of Imperial College were delayed due to reasons unrelated to VNET.
  2. Several workstations which could have been converted are in fact due to be replaced. Their conversion will follow or coincide with the installation of the replacement machines.
  3. It was decided that the ICF software to be used by many workstations should be modified. This has now been done.
  4. Problems were encountered with Daresbury-style PDP workstations. These are believed to be fixed, but final tests have still to be carried out.
  5. In addition to the above factors, a number of further problems emerged. Of 40 reported problems, 15 have been fixed, 5 are about to be fixed, 5 may be avoided by a change in operational procedure and 3 lack sufficient detail to study, leaving 12 to fix.

Current plans are to convert the following sites: RLGB, APPLETON, READING, SOTON, DURHAM and DARSBRY. This will be followed by further GEC4000 and Daresbury PDP machines during January and February. A schedule will be published which will also include plans for other workstations.

Improvement of Network Access to the RAL IBMs

We are in the process of implementing multiple connections between both packet switching exchanges (PSE-1 and PSE-2) at RAL and the IBM systems (MVT and VM). Currently, a single 'wide-band' connection is used between RAL PSE-1 and each of the IBM mainframes. The existing and the new connections all run with the less efficient 'Binary Synchronous' link protocol. Consideration is currently being given to the use of an intermediate processor to act as a High-level Data Link Control (HDLC) converter, interfacing to a block multiplexor channel. If this project is successful, an even bigger improvement in data throughput and network response should result.

London Network Gateway

Approval has been given for the establishment of a gateway between SERCNET and the network based at ULCC in London. This will eventually lead to a much improved service for SERC users within London University and improved access to the current 'Metronet' machines. The mechanisms for using this gateway have not yet been defined and the machine to be used for this function has still to be provided. The new service is expected to become available during the first quarter of 1983.

JNT PADs

Soon appearing on the scene will be new devices called JNT PADs (made by Camtec Ltd). JNT PAD stands for 'Joint Network Team Packet Assembler/Disassembler'. The primary function of these devices is to assemble input from terminals into network packets for transmission to a computer, or to receive output packets from the computer and disassemble them for presentation to a terminal or, eventually, to a printer. The device provides a very economical means of supporting up to 16 terminals on a single network connection. Currently, software is loaded from cassette recorders, although a down-line loading option is being worked on for the near future. It is envisaged that these devices will become available with the GEC2050 replacement programme and provide networked terminal support for certain local areas, eg in buildings or on sites. In particular, they will replace the Rutherford PACX service.

Data Communications

There is a widely held belief that many of the intermittent faults, and quite a few of the solid faults, on leased private wires are due to the equipment associated with the speech and signalling facilities. An early assumption in fault diagnosis for telecommunication circuits was that a speech facility on a private circuit would make diagnosis quicker and more effective. It was for this reason that all RAL circuits were provided with these facilities.

Practical experience shows that the use of these in-built facilities during diagnosis is rather cumbersome. The co-ordination of the transfer between speech and data profiles is not always as easy as it would appear. In practice, therefore, there is a tendency to use a second channel (ie a call over the public telephone network) to co-ordinate such tests. It therefore seems to be in the general interest that these speech facilities should be removed, and a programme to do this is about to begin. Ideally, this would require that a telephone with access to the public network should be available within a reasonable distance of communications equipment. In general, the use of such telephones would be for calls originated from RAL, though it may occasionally be necessary to initiate a diagnosis by a short call into RAL. Where such a telephone does not exist we would like arrangements to be made for its provision. This activity will be co-ordinated from RAL and is expected to take place during the first half of 1983. There will be some planned interruptions to services while the work is carried out. The final result should be an improved service at marginally lower cost.

CERN Link

A Networked Job Entry (NJE) link has now been established between the RSCS (VNET) machine at RAL and the IBM complex at CERN. This link provides the following facilities:

  1. Job submission from CMS or MVT to the CERN IBM system,
  2. Job output (print or punch) from a CERN job to a CMS virtual machine,
  3. Issuing of a command from CMS to interrogate JES2 at CERN,
  4. File transfer between CMS or OS disk at RAL and IBM datasets at CERN.

Further facilities are planned but they will require more testing/development.

The CERN IBM system is known as node GEN to VNET. It also has an alias RM102. The status of this link may be found by typing:

VNET Q GEN    (from CMS)
Q GEN         (from a VNET workstation)

as for any other VNET link. The RAL VM system is node RLVM370 also known as node N4 to CERN.

The EXEC SUBCERN on the U-Disk may be used to submit CMS files to the CERN IBM system for execution.

The ELECTRIC obey file JB=B2B.OSUBCERN(NJ) can be used to submit jobs to the CERN IBM System for execution.

A job executing on the CERN IBM system can send output to the virtual reader of a CMS machine at RAL by a JES2 ROUTE card in the CERN job.

eg

/*ROUTE PRINT RLVM370.<CMSID>
or
/*ROUTE PRINT N4.<CMSID>
or
/*ROUTE PUNCH N4.<CMSID>   for punched output.

For a job submitted from Wylbur:

RUN DEST N4.<CMSID>

The exec CERN on the U-Disk may be used to send a command to JES2 on the CERN IBM computers and return the reply to the user's terminal.

The calling sequence is:

CERN <JES2 command>
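For example, to ask JES2 at CERN to display its active jobs one might type (an illustrative invocation only; $DA is the usual JES2 display-active command, but the commands accepted are those valid on the CERN system):

CERN $DA

The reply from JES2 is then displayed at the terminal.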

Workstation Upgrades

In the ICF area a small number of sites are being given larger machines and the redundant machines will be reconfigured to provide a larger and improved workstation facility at certain GEC2050 sites. The displaced GEC2050 hardware will be used to enlarge non-networked GEC2050s to enable them to be connected to the SERC network. At other sites existing ICF facilities will be modified where necessary to accommodate an existing population of GEC2050 users and the GEC2050 removed.

The following table indicates those sites for which changes have been agreed:

Site                    Current equipment   Upgrade    Comments
Bangor                  DEC10-Gateway       Replaced   GEC2050: was RM90
Durham                  GEC4070                        Done July 82
Westfield               GEC2050             GEC4080    Jan 83
Edinburgh (Univ-Phys)   GEC2050             GEC4080    Jan 83
Leics (Univ)            GEC4090                        New M/C at Leics Poly with links
DESY                    GEC2050             GEC4065    Mar 83
CERN                    GEC2050             GEC4065    Mar 83

PACX Service Names

A new version of the PACX software was introduced on Monday 2 August. The important change was that services may now be selected by alphanumeric names eg CMS, RLGB. Speed selection will be done automatically. However, if a terminal set at say 4800 baud finds that there is a queue for the service, the user will have to change the terminal speed before attempting the same service at another speed. PACX can recognise SERC network names plus CMS, ELEC, and CERN. The existing numerical system will no longer be appropriate in most cases.

RHELIB

The MVT version of the MVT1D routine has been rewritten so that it now returns the hardware machine identification (currently 3032 or 3081) instead of the software identification, which can be unreliable.

The writeups for the RHELIB routines can now be accessed on CMS via the help system. 'HELP RHELIB MENU' will give a full list of routines, while 'HELP RHELIB X' will access the writeup for routine X.

CERNLIB

A number of bugs have been fixed in the library. In particular, the routine INTRAC in GENLIB now works correctly and an overwriting bug in TIMEX has been cured.

As mentioned in an earlier FORUM (26), the routines UZERO, etc in SYS1.CERNLIB and KERNLIB TXTLIB R (but not CR.PUB.PRO.GENLIB4) were replaced by the versions previously in RHELIB. These have since been modified so that they issue a warning message if they are not called with the correct number of arguments.

The MINUIT package has been installed on MVT and CMS as part of the standard CERN library. Its usage differs slightly from the previous versions available at RAL. See 'NEWS MINUIT' for details.

The CERNLIB short writeups can now be accessed on CMS via the help system. They have been modified so that they are reasonably presentable when output on a terminal, but they do contain special characters (such as Greek letters and subscripts) which will normally appear as percent signs. They are accessed using 'HELP CERNLIB MENU' for the menu, and 'HELP CERNLIB name', where 'name' is the catalogue name, eg B102.

SMALL ITEM

RLR31 - The RLR31 workstation has been removed from service by NERC, whose computing service no longer has an office at RAL.

2. QUESTIONS RAISED AT CCR MEETING (22/11/82)

Q1. It does appear that the setup turnround for small jobs of less than 350K is noticeably slower.
A1. The main cause is the change in job classes and initiator settings, together with the shortage of tape drives. Users are now able to run 1.5Mb 5 minute jobs in prime shift. We are monitoring the performance and will publish new guidelines when we are confident that they are reliable. Users should ask for more disk space to assist their development rather than use tapes. There is space available. Ask PAO, who refer large requests to Resource Management when necessary. (D G House)
Q2. Have any figures been produced for the new machine, in terms of CPU hours that it is expected to deliver?
A2. It is too early to be certain of the ability of the new configurations to produce CPU hours (195 equivalent). Indications are that the loss will be less than 20%. When we are sure of the delivery of hours we will tell you. (D G House)
Q3. (a) We sometimes get a slow response from CMS, up to 5 minutes delay. (b) If one accesses the network via a non-networked workstation a funny character appears at the start of each line.
A3. You are using an unsupported route to CMS via the MVT system which can cause severe delays and spurious output. Would users who cannot access CMS via a supported route (PACX, network, VNET) please contact the PAO. (T G Pett)
Q4. The Division wishes to get rid of their card punches. Is the next step to get rid of card readers?
A4. No. You can run card readers without the 2821 card reader/punch controller. The latter is proving to be a great source of unnecessary expenditure. We expect the use of cards to decrease but while cards are used we will provide readers. (D G House)
Q5. One of our production programs is now taking much longer to execute. Can you explain why?
A5. Yes. The floating point calculations on the IBM 3081 are much slower than on the IBM 195. Please notify Dr M R Jane if you find that your programs are running more slowly, and in particular if they are not floating point calculations. For genuine hardship, relief will be given. (D G House)
Q6. Will the new MVS system say how much the job costs in pounds (like CERN where they charge in terms of Swiss francs)?
A6. This is an interesting thought. We are considering it now that it has been asked, but not before MVS. (M R Jane)
Q7. How will the MVT systems be affected by MVS testing and will the amount of memory be reduced?
A7. MVS development will take resources. It will be measured and monitored so that its impact on the users can be kept to a minimum. The new system will become more visible when the trial period starts at the end of 1983. (D G House)
Q8. Should users make use of their old tapes?
A8. Users are requested to ask themselves (a) Is my tape data required and if so (b) Is it readable? Please notify Operations if you intend to check out your tapes - it is a big job. If you no longer require any of your tapes, please give the numbers to the Magnetic Tape Librarian. This is a real and urgent matter. We have real estate problems. (D G House)
Q9. Quite often one simply wants to add an extra file to the end of a tape. To do this, current practice (certainly at UCL) is first to do an XTAPE to find the last file and then copy the data. Is it possible that TDMS could be modified to keep track of the last file on a tape?
A9. It is not possible to change TDMS to do this since there is no easy way of guaranteeing that we can supply it with accurate data. The data management systems we are considering in MVS will keep track of data sets and will have back-up and archiving facilities for copying files to tape. It is probably best to wait until we have ascertained what facilities can be provided on MVS and see if they satisfy the requirements. (A R Mayhook)
Q11. The cost of disk transfer for paying customers is higher than that of tape transfer.
A11. Yes. We will review the charging algorithm in time for the new Financial Year, 1 April 1983. (M R Jane)
Q12. Will the Division purchase an Automatic Tape Loader (ATL)?
A12. The ATL has been considered but current opinion is that it is not a suitable device for bulk storage of data. It is fairly expensive and the fact that it would probably be the only one in the UK means that additional heavy costs would be incurred in providing adequate maintenance. It is also reputedly difficult to maintain and requires a lot of off-line maintenance to keep it going. (A R Mayhook)
Q13. The use of both raw and summary data would not appear to solve the tape problem, even with an MSS. Processing such large amounts of data will almost certainly have to be done on tape.
A13. The processing of very large data sets will probably be done best from tape. However it is not obvious that all the jobs which currently access large data sets need to do so. Many of them read a small part of the data set only. When an MSS becomes available methods of processing data should be adapted to take advantage of the characteristics of the device. It will probably be better to do as much work as possible with small subsets of data on disk or on the MSS and use tape only when it is really necessary to process large volumes of data. (A R Mayhook)
Q14. In general users tend to copy from back-up when an original tape has parity errors. MSS would reduce considerably the amount of tape mounts.
A14. With an MSS and a good data management system the methods currently used for backing up data will change. The loss of data when stored on disk and MSS ought to be lower which will reduce the need for mounting back-up tapes. It will also be possible, given sufficient space, to keep back-up copies of data sets on the MSS. (A R Mayhook)
Q15. Are there any plans to allow CMS batch jobs to run overnight?
A15. We are looking at two possible batch monitors for controlling CMS batch jobs. Both of these function by having a controlling monitor virtual machine which schedules jobs to run in one of a number of slave machines according to class, priority, CPU time, core requirement etc. Both of them allow batch jobs to be run overnight. A decision will be made shortly and one of these systems will be installed by the end of January. (T G Pett)
Q16. Can you outline what the advanced CMS course will cover?
A16. The course will cover VNET, XEDIT, EXEC2, XPLANT, GRAPHICS and various small items. (T G Pett)
Q17. It is quite possible that one may need a facility to set different distribution codes, depending on where the output is to be printed.
A17. Yes, the need is recognised. We may need a method of changing the distribution codes and this needs study. (P J Hemmings)
Q18. What does the Division intend to do with the ELECTRIC edit files and archived files when ELECTRIC is rundown?
A18. ELECTRIC files in the on-line filestore will be accessible from CMS via CMSELEC. This allows files to be copied into CMS using all the ELECTRIC group edit facilities. Archived files will only be accessible if they are first restored to the on-line filestore and a method of doing this will be provided. (T G Pett)
Q19. Whom does one approach for extra CMS space?
A19. Requests for up to 10 cyls (5 Mbytes) total should be made to the PAO; above that limit, to Dr M R Jane, Head of Resource Management and Communications. (M R Jane)
Q20. Is anything being done to provide the network workstations with full-screens?
A20. We have a development which enables Cifer terminals (the current standard) to be upgraded very cheaply to emulate (as an option) the IBM 3270 screen terminal. This will work both on PACX and over the SERC network. It is now under test in Computing Division and shortly we will begin a trial with selected external users. If this is successful, full screen facilities can then be made generally available. Although the current tests are very promising we are reluctant to give a timescale until an external user trial has begun in case rigorous use throws up problems which require further development. (C J Pavelin)
Q21. It is very difficult for new users to get used to the different commands required for working with OS data sets, like MVTDISK, LISTDS, OSCOPY etc.
A21. The names of these commands will be rationalised as far as possible when MVT is replaced by MVS. In the meantime users can provide their own synonyms which can be defined in the PROFILE EXEC. (T G Pett)
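As an illustration of the idea mentioned in A21 (a sketch only: the name LDS and its contents are hypothetical, not a supported RAL facility, and the exact EXEC 2 rules should be checked in the CMS manuals), a user could place a two-line EXEC 2 file called LDS EXEC on their A-disk so that typing LDS acts as a shorthand for LISTDS:

&TRACE OFF
LISTDS &1 &2 &3

Any arguments typed after LDS are passed through to LISTDS.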
Q24. Some users find that the jump from the CMS Introduction to the CMS Users Guide is too drastic. Is it possible for an intermediate manual to be produced?
A24. It is unlikely that another manual will be produced as we are fully engaged in revisions of existing manuals. However, we continually monitor the state of our manuals and welcome user views. Please forward specific problems so that we have as clear a picture as possible. (R E Thomas)
Q25. Because the system was reconfigured during System Development on a Thursday, the user only noticed a problem late on Friday afternoon.
A25. We do note the point and will try to ensure that the Wednesday morning sessions are used for changes which may affect user batch jobs. (P J Hemmings)
Q26. Will all datasets with USER.MON be deleted when the new dataset names convention is established?
A26. In due course the Freedisk housekeeping utility will cease to regard such dataset names as legal and therefore they would be deleted. We cannot make a decision on when to do that until a revised chapter C6 of CIGAR has been issued. (P J Hemmings)
Q27. User uses a /*NEEDS card because his job runs out of time.
A27. Any gross anomalies in the relative charge factors between the 3081D and the 3032 are being reviewed, with any changes being planned for the next Financial Year, 1 April 1983. (M R Jane)
Q28. Should one get the same amount of CPU time on the two machines for a given time limit?
A28. On average yes. Unfortunately the relative performance on different machines varies from program to program. For example, the 3081 is relatively poor at double length floating point, relatively good at running compilers, compared to its average performance against a 3032 or 360/195. There is nothing we can do about this. It is a consequence of having different processors. However, the Atlas 10:3081 ratio looks more constant over different programs, so the problem may go away in the future. (C J Pavelin)
Q29. Is it possible to trade ELECTRIC space for CMS space?
A29. Yes, after discussion with PAO who may wish to refer user to Dr M R Jane for large amounts of disk space. No user should use the excuse of shortage of CMS space to avoid moving from ELECTRIC to CMS. (M R Jane)
Q30. On the subject of user friendliness, the HELP system is fine provided you know what command you want.
A30. A log is kept of all help requests which fail and this is periodically reviewed. Also a general help facility has been provided. We hope therefore to cater for those who are 'nearly' right. (R E Thomas)
Q31. The Oxford workstation printers stop printing if one of them is offlined, so users frequently arrive in the morning to find a queue waiting to be printed.
A31. This fault has now been demonstrated to Systems Group and is being actively investigated. (J C Murray)
Q33. Recently we had a job that produced parts of programs that did not belong to us. Why was this?
A33. This problem was caused by a HASP spool corruption earlier last month. This was the reason for the cold starts. (D G House)

3. REROUTING OF OUTPUT QUEUED FOR VNET

Recent extensions to the HRESE exec, and associated processing by JOBSTAT, now permit output files queued for VNET to have their destinations changed. Since job output may be controlled by various machines on its passage through the system, there are inevitably some limitations. The aim here is to define not only how such resetting of output destination can be done, but also when and why it may not be done.

The command format for resetting output destination is as follows:

HRESE jno (<ACCT acct> <ID id> <ROUTE destination>

jno          - specifies the HASP job number
acct         - gives the account number under which the job was run
id           - gives the user identifier under which the job was run
destination  - specifies the output destination and is composed as follows:

primary<.secondary><(qualifier)>

This is a single parameter and no embedded spaces are permitted.

primary specifies either:

a remote workstation, given by either its full name or its alias, eg RLR26 or REMOTE19 or RM19. When the output is under the control of HASP no secondary value is permitted. When under the control of VNET, the secondary field defines the category associated with the output file.

or RLVM370. This is valid only when the output is under the control of VNET. It is a request to send the output to the virtual machine named as the secondary. If the secondary is omitted then output will be produced on the VM system peripherals.

The (qualifier) field specifies the output type which is affected by the HRESE command. By default only print output will be reset. If the qualifier is given as (PUN) then only punch output is reset. If given as (ALL) both printer and punch output will be reset.
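Two illustrative commands (the job number, account, identifier and destinations are hypothetical, but the syntax follows the definition above):

HRESE 1234 (ROUTE RLR26
HRESE 1234 (ACCT A1234 ID ABC ROUTE RLVM370.ABC(ALL)

The first resets the print output of HASP job 1234 to workstation RLR26; the second, which is valid only once the output is under the control of VNET, resets both print and punch output to the virtual reader of the CMS machine ABC.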

The following limitations are imposed on the HRESE command.

  1. It is not possible to reset output destination from a VNET workstation to one controlled by HASP after the output file has been passed to VNET.
  2. It is not possible to reset output destination of a job controlled by HASP to a CMS machine.
  3. It is not possible to reset output destination to a remote not having the appropriate output devices. An attempt to do so will cause an error message and the termination of the command.
A W Burraston - Systems Group

4. DIARY

IBM PREVENTATIVE MAINTENANCE DATES

Routine maintenance on the 3032 IBM computer is currently undertaken once a month on Thursdays between 18.00 and 22.00 hours. The dates for the remainder of 1982 and for 1983 have not yet been decided, but adequate notice will be given.

AIR-CONDITIONING SHUTDOWNS IN 1983

The date of the next shutdown of all computer systems (except network equipment) for the maintenance of air-conditioning plant has now been fixed. It has been scheduled as follows:

1600 hrs on Fri 8 April till 0745 hrs Mon 11 April

The duration of the shutdowns has now been agreed with Engineering Division. There will be 2 shutdowns each year, one in the spring (the date of which is given above) and the other in November. The autumn date will be published later.

COMPUTING DIVISION COURSES

The User Interface Group intends to run a number of courses for users of the IBM and Prime Computers, at the Atlas Centre, during 1983.

4 × IBM New Users Courses

The course is designed for those people who have been using the IBM systems for a few months and are ready to learn more about the facilities, including both Batch and 'Front-End' (simple CMS).

Dates are: 21-24 February, 25-28 April, 4-7 July, 24-27 October.

3 × ELECTRIC/CMS Conversion Courses

This course will introduce those who are currently using the ELECTRIC system to the facilities of CMS. Most candidates should have attended the 'IBM New Users' course.

Dates are: 6/7 April, 29/30 June, 19/20 October.

3 × Advanced CMS Courses

This is to introduce those who are regular CMS users to the more advanced facilities and RAL enhancements to the system. All candidates should have attended the 'ELECTRIC/CMS Conversion Course'.

Dates are: 16/17 March, 20/21 July, 16/17 November.

2 × Prime New User Courses

This is to introduce users to the facilities of the Prime Computer.

Dates are: 23/24 May, 21/22 November.

For further information and enrolment, please contact the Program Advisory Office (0235 446111 or ext 6111) or R C G Williams (ext 6104).

6. REPORT ON THE DATA USAGE PRESENTATION TO CCRM 22 November 1982

This is a summary of the information to be found in an RAL Computing Division internal paper CCTN/P43/82 which is available from the secretary of Systems Group.

Initially some raw statistics were presented on the current usage of all types of data. This was done in order to give a feeling for the magnitude of the problem.

Disk Usage

User disk contents (files)           approx 5000
Most popular file size (Kbytes)      20

85% of all files were created in the last 7 months and 90% of all files were used in the last 6 months.

Tape Usage

Different tapes used per day         approx 200
Tape mounts per day                  600-800
Local library size                   6000
Total library size                   55000
Library growth rate (tapes/week)     100

Tape Contents

Average no. of files per tape        7
Average data/tape (Mbytes)           62.5
Mean file size (Mbytes)              8.5
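As a rough indication of scale (a simple multiplication of the averages above, so only approximate): 55000 tapes at an average of 62.5 Mbytes per tape corresponds to some 3.4 million Mbytes of data held on tape, of which the 6000 tapes in the local library account for roughly 375000 Mbytes.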

This was followed by a description of a model which will enable an understanding of the implications of various actions.

Figure 1

Figure 1 shows the disk space required for a system which keeps files of a certain size or less on disk for a period of time without them being used, and then archives them to some alternative medium. It is a plot of the disk space required (in Mbytes) against the length of time a file is kept on disk (in days). This plot is repeated for various file sizes (in Mbytes).

Figures 2 and 3


Figure 2 shows the number of tapes in the library plotted against the number of days since each tape was last used, and represents a measure of the tape use pattern. If it is assumed that this pattern of tape use also applies to tape files transferred to disk, and that it is equally applicable to all size subsets, then we may estimate the data transfer rate (Mbytes/day) between the primary (disk) and secondary (tape or MSS) storage. This is done in Figure 3 for different sizes of tape dataset.

Figure 4


Figure 4 shows the tape mounts which would still be required if tape datasets of various sizes were kept on disk instead of tape and enables some estimate of the saving in operator time which could be achieved.

A P J Lobley - Systems Group

7. APPLYING FOR A PERQ

What is a Perq?

The Perq is a powerful single-user minicomputer, capable of approximately one million high-level assembler operations per second. At least as important as the cpu power available are the high-quality A4 graphics display (resolution approx 100 pixels per inch) and the high interaction rate. These features mean that the Perq is well-suited to the role of a personal scientific workstation.

The recommended Perq configuration is: cpu, 1 Mbyte memory, 24 Mbyte Winchester disc, display, tablet and puck. I/O ports are provided by one each of RS232 and IEEE 488 (GPIB) interfaces. At present the POS operating system, together with Pascal and Fortran 77, is available. Over the next few months UNIX Version 7, with new Pascal and Fortran 77 compilers, will be released. This version of UNIX will offer virtual memory and a full 32-bit address space, and the compilers will also offer 32-bit addressing.

At present communications to the Perq are limited to the Chatter system, which enables a relatively low speed connection to other machines via the RS232 interface. Developments are in hand to provide both Cambridge Ring connections and X25 (SERCNET) access. Hardcopy output devices are yet to be announced by ICL although it is known that the Versatec V80 electrostatic printer/plotter will be available in a few months.

Who owns the Perq?

It was originally proposed that Perqs be supplied on loan for the grant period to grant-holders whose requests for them had been approved by the appropriate SERC committee. This policy has now been changed (retrospectively where necessary). Perqs are now treated in much the same way as other equipment purchased for a grant, the exception being that they are purchased and maintained for the grant period, centrally by SERC. At the end of the grant period the Perq is owned by the grant-holder, in exactly the same way as other equipment. Note that this means that maintenance costs become the responsibility of the grant-holder's institution unless a further SERC grant has been obtained to cover such costs.

There are some exceptions to the above paragraph. Some Engineering Board committees have organised small loan pools of Perqs for specific (usually short-term) tasks. The areas concerned, with suitable RAL contacts and telephone extensions are:

How do I apply?

Perqs are applied for in the same way as other equipment on SERC grants - via section 20 on the RG2 application form. The only difference is that costs should not be inserted. This will be done by Central Office staff, with advice from RAL if necessary. In this way SERC can take advantage of bulk purchase discounts on hardware and maintenance, and of any price reductions (costs in general are falling rather than rising). A typical section 20 entry might read:

(1) ICL Perq with 1 Mbyte memory ...
(2) Maintenance cost for grant period, COSTS TO BE SUPPLIED BY SERC ...
(3) Any other equipment £ cost
... ...
... ...

Note that the cost of the Perq is part of your grant cost. In no way are Perqs 'free'!

It is obviously necessary to be aware of approximate costs. Currently the cost (including VAT) of the recommended 1 Mbyte memory Perq is £18700 (this includes software costs). The maintenance charge is £115 per calendar month. Both these prices are subject to change, so do not put them on the RG2. Other useful costs to know (these should be quoted) are:
Cambridge Ring connection £1.5k
X25 connection £2.0k + cost of line to PSE + cost of port
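As a rough worked example (the three-year grant period is an assumption for illustration only, not an SERC figure): maintenance at £115 per calendar month amounts to 36 × £115 = £4140 over the grant period, in addition to the £18700 purchase price and any connection costs quoted above.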

There may well be occasions when extra advice is necessary. In such cases it is best to contact me, preferably well before the closing date for grant applications, on ext 6491 or by direct dialling 0235 44 6491.

K Robinson - Applications Group

8. IBM SYSTEM DEVELOPMENT

System development is currently scheduled on Wednesday mornings from 08.30 to 10.30 and Thursday evenings from 17.30 to 19.30. It should be noted that these times are under consideration and may be changed.
