DCS Projects: Hatfield Polytechnic
S E HERSOM
DEVELOPMENT OF OPTIMISATION ALGORITHMS FOR PARALLEL COMPUTATION
Oct 79 - Sept 81
With the advent of relatively cheap central processor units, computer systems with many CPUs operating in parallel can confidently be expected to be readily available in the near future. The NOC is to investigate how optimisation algorithms should be developed in order to make full use of a parallel computation facility. In the context of this project, the NOC is considering parallel processing with multiple instruction, multiple data (MIMD) facilities and is excluding single instruction devices such as the DAP.
Optimisation algorithms written for sequential machines often include sections of code which could be executed in parallel, for example, the calculation of the partial derivatives of a function when its gradient is required. The improvement in performance, measured in computation time, is predictable, and the NOC's own work has confirmed that it is attainable in practice. Such parallelisation would certainly be implemented in this proposed project, but would not be considered as part of the research programme.
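The coordinate-wise parallelism described above can be sketched in modern terms. This is an illustrative reconstruction, not the NOC's code: the forward-difference scheme, step size and test function are invented for the example, and a thread pool stands in for the separate processors of a parallel machine.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_gradient(f, x, h=1e-6):
    """Estimate grad f(x) by forward differences, one coordinate per worker."""
    n = len(x)
    fx = f(x)

    def partial(i):
        # Each partial derivative is independent, so each may be
        # computed on its own processor.
        xp = list(x)
        xp[i] += h
        return (f(xp) - fx) / h

    with ThreadPoolExecutor() as pool:
        return list(pool.map(partial, range(n)))

# Example: f(x) = x0^2 + 3*x1; the gradient at (1, 2) is about (2, 3).
g = parallel_gradient(lambda x: x[0] ** 2 + 3 * x[1], [1.0, 2.0])
```

With n processors the n partial derivatives cost roughly one function evaluation of wall-clock time instead of n, which is the predictable speed-up the paragraph above refers to.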
Algorithms for global minimisation, however, are often based on calculations made at random values of some optimisation vector and it may readily be seen that calculations of the objective function for different values of this vector could be carried out in parallel. Research is required into the effect of adopting such procedures and on how best to implement them.
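A minimal sketch of this idea follows, in which the objective is evaluated at a batch of random vectors concurrently. The function, bounds, batch size and seed are invented for illustration; real global algorithms such as Price's refine the sample iteratively rather than stopping after one batch.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def parallel_random_search(f, bounds, n_points=64, seed=0):
    """Evaluate f at n_points random vectors in parallel; return the best."""
    rng = random.Random(seed)
    points = [[rng.uniform(lo, hi) for lo, hi in bounds]
              for _ in range(n_points)]
    # On a true MIMD machine each evaluation would run on its own
    # processor; here a thread pool stands in for that facility.
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(f, points))
    best = min(range(n_points), key=values.__getitem__)
    return points[best], values[best]

x_best, f_best = parallel_random_search(
    lambda x: x[0] ** 2 + x[1] ** 2, [(-1.0, 1.0), (-1.0, 1.0)])
```

Because the random evaluations are mutually independent, the batch completes in roughly the time of one objective evaluation, regardless of batch size, given enough processors.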
The previous work carried out by the NOC included the implementation of a least-squares method for finding a local minimum (as opposed to a global minimum) on a parallel system, and it is proposed to extend this work under this research grant.
The following are the main objectives of this project:
(1) To design parallel versions of a number of existing algorithms for global optimisation and to implement them on the available parallel systems. These algorithms would include Price's method and Torn's method. Considerable work has already been carried out at the NOC on the sequential form of such algorithms. The Price algorithm was designed for use on small computers, and has features which make it attractive as the basis for a simple parallel asynchronous technique. The Torn algorithm is more complex, and is probably the best of the global optimisation techniques investigated at the NOC; it could be parallelised in a number of different ways.
(2) To carry out a programme of tests on the above programs, using test problems already available.
(3) To compare the results of such tests, wherever possible, with predictions based on simulation models of, for example, the queueing behaviour of the various algorithms/system combinations. A suitable simulation program is currently being developed at the NOC.
(4) In addition to the above work on global optimisation, the parallel programs for non-linear least squares problems already implemented on the dual-processor system in the Department of Computer Studies at Loughborough University would be extended. Such tests would not be confined to that system, as the effect of a larger number of parallel processors should be investigated. The performance of the asynchronous algorithm mentioned above on a larger system would be of particular interest.
PROGRESS TO SEPT 80
Approximately 40% of the grant value was earmarked for equipment to be installed at the offices of the NOC so that remote access could be made to appropriate computer facilities. This equipment, which comprises a VDU, a matrix printer, and a P.O. line and modem, has been purchased and installed.
PROGRESS TO SEPT 81
The project suffered two setbacks in 1981: first, Dr McKeown resigned to join the ICL DAP Support Unit, and second, it became clear that the parallel processing systems that it was intended to use would not be available on line for some time.
At this point Dr Dixon took a more active interest in the project and isolated four application areas where the use of parallel processors was likely to be advantageous in optimising industrial systems:
- systems involving few variables, but where an objective function evaluation implies considerable computing including, probably, simulation.
- systems involving a large number of variables, say of the order of 4000, where the objective function evaluation may only be of the same order as the optimisation overhead, but where the difficulty lies in the dimensionality.
- on-line systems where the main difficulty of a sequential implementation is essentially computer time.
- the multi-extremal global optimisation problem where, sequentially, the time required for solving even small simple problems is usually too great to be practical.
Progress has been made towards establishing optimisation algorithms for all four systems. An application was made for a separate SERC grant to appoint a research assistant to investigate the global optimisation problem. He is in post, and the progress on that project is reported in the following project description, as is the progress on on-line system optimisation, with which it is closely linked.
The interest in industrial systems with few variables and exceedingly expensive objective function evaluations is typified by the investigator's links with Rolls Royce (Bristol) Ltd. A CASE research student (M. C. Dew) commenced research on the application of optimisation algorithms to their problems in October 1980. He is initially implementing the combination of an exact penalty function approach with recursive quadratic programming proposed by Dixon, which it is believed will be both extendable to incorporate the features Rolls Royce require and modifiable to run on parallel systems. It is of interest to note that Rolls Royce state that the size of optimisation problems attempted is still limited by the scope of available software.
Some use has been made of the DAP (SIMD) machine at Queen Mary College: a modified Newton-Raphson algorithm for use on a SIMD machine has been proposed and its implementation commenced. Given the DAP environment, it is hoped that this will result in a time saving for problems in the range 32 < n < 64. A modified conjugate gradient algorithm should also be effective for problems with large n. This work has been reported at:
- Queen Mary College; DAP Unit Optimisation Users Group meeting June 1981
- SERC Distributed Computing Panels meeting at the University of Sussex June 1981
- CREST/CNR Summer School on Design of Numerical Algorithms for Parallel Processing, University of Bergamo, June 1981.
Details of this work are written up in a paper which will appear in the proceedings of the Bergamo Summer School.
1. J. J. McKeown, Experiments in implementing a non-linear least-squares algorithm on a dual-processor computer, N.O.C. Technical Report No. 102, 1979.
2. W. L. Price, A controlled random search procedure for Global Optimisation, in 'Towards Global Optimisation 2' L.C.W. Dixon & G.P. Szego (Eds), North Holland, 1978.
3. A. A. Torn, A search-clustering approach to Global Optimisation, (Ibid).
4. J. Gomulka, A user's experience with Torn's clustering algorithm, N.O.C. Technical Report No. 90.
5. L. C. W. Dixon and G. P. Szego, The Global Optimisation Problem: an introduction, in 'Towards Global Optimisation 2', L. C. W. Dixon and G. P. Szego (Eds), North Holland, 1978.
6. J. J. McKeown, Aspects of Parallel Computation in Numerical Optimisation, Technical Report 108, January 1980.
7. J. J. McKeown, Simulation of a Parallel Global Optimisation Algorithm, Technical Report 109, March 1980.
8. L. C. W. Dixon, Exact Penalty Function Methods in Non-linear Programming, N.O.C. T.R. No 103, February 1979.
9. L. C. W. Dixon, The Place of Parallel Computation in Numerical Optimisation I, The Local Problem, CREST/CNR Summer School on Design of Numerical Algorithms for Parallel Processing, University of Bergamo, June 1981.
DR J M BACON
AN INVESTIGATION OF SOFTWARE STRUCTURES TO FACILITATE DISTRIBUTION
June 81 - May 83
Background and Nature of Research
This project is to explore the extension of well-established principles for the design of single-site operating systems and concurrent programming languages to local-memory based distributed systems.
The objectives are:
- To establish a working design methodology for concurrent software.
- To evaluate alternative mechanisms which may be provided for concurrency, structuring and protection, to extend the associated system structures to accommodate a distributed system, and to integrate communications handling into the overall design.
- To study distributed resource allocation, implementing and evaluating simple, basic models on a small system rather than a full production system.
A five-site, distributed system has been implemented based on a soft token ring. Communications software follows the ISO reference model.
Two kernels have been implemented which provide asynchronous and synchronous message passing. Only the message passing primitives differ in the two kernels and process level software will run on both. Software to monitor communications traffic has been developed so that one site may be used to initiate and time evaluation tests running on other sites.
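The distinction between the two kernels can be illustrated with a sketch (hypothetical, not the project's code): an asynchronous send buffers the message and returns at once, while a synchronous send blocks until the receiver has taken the message, so only the send/receive primitives differ and code built on top of them runs on either.

```python
import queue
import threading

class AsyncChannel:
    """Asynchronous message passing: send buffers the message and returns."""
    def __init__(self):
        self._q = queue.Queue()

    def send(self, msg):
        self._q.put(msg)        # never blocks (unbounded buffer)

    def receive(self):
        return self._q.get()    # blocks until a message arrives

class SyncChannel:
    """Synchronous (rendezvous) passing: send blocks until the receive."""
    def __init__(self):
        self._q = queue.Queue(maxsize=1)
        self._taken = threading.Semaphore(0)

    def send(self, msg):
        self._q.put(msg)
        self._taken.acquire()   # wait until the receiver has the message

    def receive(self):
        msg = self._q.get()
        self._taken.release()
        return msg

# Asynchronous: the sender does not wait for the receiver.
a = AsyncChannel()
a.send("ping")
reply = a.receive()

# Synchronous: sender and receiver must rendezvous, so the send
# runs in its own thread here.
s = SyncChannel()
t = threading.Thread(target=s.send, args=("pong",))
t.start()
msg = s.receive()
t.join()
```

The asynchronous form trades buffer space (and the buffering overhead measured in the tests below) for sender concurrency; the synchronous form needs no buffering but couples the two processes in time.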
Tests have been carried out to measure the overhead of message buffering and consequent reduction in time available for running concurrent processes in such a system.
Work was started to simplify the communications handling software to exclude low-level checking and re-transmission and to measure the consequent increase in performance. This work has subsequently been transferred to a more realistic hard ring based system.
Process level software for file service, printing service and device handling has been developed and has provided a basis for the study of the design of concurrent software and dynamic resource allocation in a distributed environment.
The work relies heavily on MSc and BSc project work for implementation manpower, involving a number of sandwich and part-time students. A primary application of the system is educational. For further information contact Jean Bacon.
DR L C W DIXON
THE ADVANTAGES OF A PARALLEL COMPUTATION FACILITY FOR GLOBAL OPTIMISATION
Jan 81 - Dec 83
Background and Nature of Research
With the advent of relatively cheap central processor units, computer systems with many CPUs operating in parallel can confidently be expected to be readily available in the near future. The NOC is investigating how such parallel systems can be used to solve practical optimisation problems (in particular the global optimisation problem) more efficiently.
In our previous research contract (Development of Optimisation Algorithms for Parallel Computation, S E Hersom) it was shown that the solution of four distinct classes of optimisation problems ought to benefit by being undertaken on appropriate parallel systems. It is hoped to demonstrate the benefit on all four classes during this project.
In doing this we hope:
- To investigate further the holistic principle developed at the NOC, applying it to a wider range of algorithms than currently achieved.
- To use the results of the implementations of the holistic Torn and Price algorithms to improve the structure of these algorithms, taking more account of the different types of computer on which they are being applied.
- To investigate other ways of introducing parallelisation into optimisation algorithms, particularly by investigating the possibility of (a) a parallel line search algorithm, (b) the use of parallel line searches, or (c) the construction of more complex function models given the new facility.
During the year considerable use has been made of two parallel computing systems, namely the SERC DAP system at Queen Mary College, London University, and the SERC NEPTUNE system at Loughborough University. The parallel Newton method has been successfully implemented on the DAP system, showing considerable time savings and allowing research to be undertaken including parallel calculation of gradients and Hessians, a parallel line search algorithm and a 4-dimensional parallel sweep. All showed the expected gains in efficiency.
Following the success of the parallel Newton method, this work was extended to look at the solution of partial differential equations via finite elements and optimisation. This has now been successfully implemented and tested for linear systems and great time savings were possible using our parallel implementations.
By using the least squares finite element formulation, any non-linear partial differential equation can be solved by posing it as a non-linear optimisation problem and applying a non-linear pre-conditioned conjugate gradient method. This approach maps ideally onto the DAP system and has been successfully implemented. The possibility of implementing parallel versions of Schlagen's method and of the Mexican tunnelling algorithm was investigated but abandoned when weaknesses were discovered in both.
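The conjugate gradient kernel at the heart of this approach can be sketched as follows. This is a plain sequential version on a tiny invented system, without the preconditioner; on the DAP the vector additions and inner products below are exactly the operations that would be spread across the processor array.

```python
def conjugate_gradient(A, b, x0, iters=25, tol=1e-12):
    """Minimise 0.5 x'Ax - b'x (A symmetric positive definite),
    i.e. solve Ax = b, by the conjugate gradient method."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]

    x = list(x0)
    r = [bi - ai for bi, ai in zip(b, matvec(A, x))]   # residual b - Ax
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(iters):
        Ap = matvec(A, p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        beta = rs_new / rs
        p = [ri + beta * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Tiny 2x2 example: solve [[4,1],[1,3]] x = [1,2].
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0])
```

Every step is built from vector updates and inner products over the unknowns, which is why the method vectorises so naturally over a processor array; the non-linear variant used in the project replaces the fixed matrix-vector product with a gradient evaluation.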
Implementations of the holistic Price method have been successfully undertaken on both the DAP and NEPTUNE systems.
Work in Hand
The extension of the work on the parallel finite element/optimisation approach to non-linear partial differential equations is in hand; a truncated Newton method is in course of implementation.
The people working on this project are: L C W Dixon, K D Patel (SERC Research Assistant) and P G Ducksbury (SERC Research Student).
It is anticipated that the software developed under this project will later be as valuable to industry as the widely used codes currently available. In particular, of the codes so far investigated, it is anticipated that a parallel method for solving non-linear partial differential equations of the type being developed will prove very valuable. Mr D Hunt of ICL has agreed to act as Industrial Supervisor for the above project being undertaken by P.G.Ducksbury.
1. L. C. W. Dixon and K. Patel, The Place of Parallel Processing in Numerical Optimisation IV: Parallel Algorithms for Non-linear Optimisation, NOC TR 125 Feb 1982. Presented at Symposium on Vector & Parallel Processors, organised by IBM Rome, March 1982.
2. P. G. Ducksbury, The Implementation of a Parallel Version of Price's Algorithm on an ICL DAP, NOC TR 127 May 1982.
3. K. Patel, Implementation of a Parallel (SIMD) Modified Newton Algorithm on the ICL DAP, Presented at Progress in the Use of Vector and Array Processors, Bristol University, August 1982.
4. K. Patel, Parallel Computation and Numerical Optimisation, NOC TR 129, Presented at the International Institute on Stochastics and Optimisation, University of Milan, September 1982.
5. L. C. W. Dixon, P. G. Ducksbury and P. Singh, A Parallel Version of the Conjugate Gradient Algorithm for Finite Element Problems, NOC TR 132, December 1982.
6. Papers presented at the EURO VI Conference in Vienna, August 1983; the IFIP Symposium, Copenhagen, 1983; and the IFIP Workshop, Gargana, 1983, will be available shortly.
DR J A GORDON
SECURE HIGH-SPEED DATA TRANSMISSION
March 83 - Feb 86
The aim of this programme is to develop and prove the strength of a cryptographically secure technique for data transmission at Megabit rates.
At the present time there is a rapidly growing commercial awareness of the need for the protection of the privacy of data transmitted over public networks. The driving forces behind this interest may be traced to, among others, the following sources:
- Public pressure, especially in the USA, over the issue of personal privacy. This pressure has resulted from concern regarding the control of the flow of information between data bases set up to store medical, criminal, financial and other records on private citizens, and concern about the degree of vulnerability of such data in the hands of possibly unscrupulous computer personnel. These matters have recently been the subject of study of the Lindop Committee on Data Privacy in the UK.
- The growing threat of computer crime whereby, for example, well-informed but unauthorised groups of experts have been able to manipulate funds to their gain. Here they take advantage of the non-conservative nature of money, which is actually a form of information and which may, contrary to Statute Book law, be both created and destroyed - it being the single main task of our financial institutions to attempt, by the imposition of accountancy practice, to force upon money a conservative property. In this area the notion of stealing data is particularly ironic.
- The likelihood in the medium term future of the proliferation of various forms of electronic mail, and the need for public reassurance regarding the privacy of communication.
- The growing tendency towards the dispersed, electronic society - a society in which business is conducted between organisations and individuals in widely separated geographical locations, who are nevertheless able to carry out coherent, coordinated tasks by means of sophisticated telecommunications systems.
- The business community will no doubt be happier to see progress in this area if it can be assured of the security of its proprietary information against eavesdropping from rivals.
- The explosive growth over the last decade of the use of computer networks.
- The expected proliferation of point-of-sales terminals and electronic-funds-transfer.
- The introduction by (for example) PTTs of specialised satellite facilities for business.
This interest in the need for data protection has had several results, among them:
- In the USA the National Bureau of Standards has adopted a Data Encryption Standard (DES) known as FIPS-46 for the cryptographic protection of data. The use of this cipher has become mandatory for all (non-defence) Federal agencies.
- The British Standards Institution is currently meeting to decide upon the attitude of the UK towards the standardisation of encryption techniques.
- The International Standards Organisation has invited member countries to submit proposals for discussion regarding the standardisation of cryptographic techniques.
- The academic community, especially in the USA, has begun to take an interest in the information-theoretic aspects of cryptography, and many novel techniques have appeared in recent issues of learned journals. Among these techniques are the various Public Key Cryptosystems, and various techniques for key exchange and for authentication.