### Linear and Non-linear Programming

#### 1971

In 1945 the American economist G. Stigler investigated the cost of a subsistence diet (Stigler, 1945). He knew the cost and composition of the foods available, and he knew the minimum nutritional requirements for a physiologically adequate diet; the problem was to find a minimum-cost but nutritionally satisfactory diet. Stigler showed that a man could live at a cost of only \$96 a year (in 1945) on a diet of 370 pounds of wheat flour, 57 cans of evaporated milk, 111 pounds of cabbage, 25 pounds of spinach and 285 pounds of dried navy beans! Although the solution is more entertaining than realistic, and no doubt appeared so at the time, the problem was thought interesting. It was later shown that there is an adequate diet which costs even less. The result was obtained by formulating and solving the problem as one of linear optimisation.

A general linearly constrained optimisation program can be expressed as

```   max f(x) subject to Ax >< b,   0 ≤ x ≤  d   (1)
```

where >< denotes an assortment of inequality and equality constraints, A is an m × n matrix, b an m-vector, and x and d n-vectors. That is, maximise an objective function, f(x), of the non-negative variables, x, subject to the linear constraints Ax >< b. The variables are also constrained by upper bounds d, where d ≥ 0. Linear programming is the special case in which the objective function and the constraints are all linear. Since 1948 constrained optimisation models of the form (1) have been used extensively in many situations. A few of the areas in which they have been used are capital budgeting, military planning, ship scheduling, production planning, designing the composition of animal feeds and oil mixtures, town planning and engineering design. The aim of using such models is the more effective use of the available resources. The growth of these uses has been determined essentially by two factors: first, the development of the mathematical tools, and hence of algorithms to solve particular problems; and secondly, the increase in computing power, which makes it feasible to solve problems with thousands of variables and linear constraints. The continued and expanded use of these optimisation models means that there are potentially great benefits to be gained from developing more effective algorithms to solve these problems.

A great deal of ingenuity is used to devise algorithms to solve such problems, but the only way to test the effectiveness of these ideas is to write a computer program, or modify an existing one, to perform the calculations. There are a number of programs published in Algol or Fortran with varying degrees of development. The commercial codes for solving large linear programs are sophisticated and efficient but largely inaccessible to a research worker, so often the only means for an author to test his ideas is to write his own computer programs. There are many pitfalls, into some of which the writer of even the simplest program will fall; many of these will probably be met before he has even reached the stage of experimenting with his proposed algorithm. However, there is no doubt that the person most likely to have the enthusiasm to develop the programs is the author of the idea himself. A major advance in the solution of programs with linear constraints was the development of the Simplex method by G. Dantzig; it is the most effective known method of solving linear programs. Thus in many cases a research worker with ideas on the problem (1) needs a basic, reliable Simplex system which he can get into and adapt to his requirements. This need was often expressed at the 1970 Mathematical Programming Symposium at The Hague.

A suite of computer programs has been developed by Dr. A. Land, of the London School of Economics, and me for just this purpose. The suite consists of programs to solve linear, parametric, quadratic and integer programs; the algorithms used are all Simplex based. The programs do not claim to be efficient or elegant, but they do claim to be reliable. They have been extensively tested, and we are confident that we have programmed sufficient accuracy checks and re-inversion procedures that a solution obtained is a true solution. The suite is not intended as a Mathematical Programming package; it is intended as a research tool to enable a research worker to test his own ideas. Time comparisons with commercial codes are impossible, but a count of the basis changes, or iterations, may be good enough to assess comparative efficiency. The programs are also available on the Atlas computer, and will be on the 1906A computer, so that they can be used by anyone with a problem.

The programs are entirely written in Fortran IV and are all-in-core routines. They are extensively described in a report (Land and Powell, 1971) which it is intended to publish as a book. The report contains a listing and description of every routine, and the use of every Fortran variable is given. These detailed descriptions are intended to make the routines easily accessible and hence widely used. The remainder of this note is devoted to a description of the programs in the suite.

#### Linear programming

The base program upon which all the others are built is designed to solve the linear program:

```   max c'x   subject to Ax >< b,   0 ≤ x ≤  d   (2)
```

where c is an n-vector and A, b and d are as before.

That is, the program maximises a linear function subject to linear constraints. The feasible region is the n-dimensional space defined by the constraints. Any point which satisfies all the constraints lies within the feasible region. A program is said to be infeasible if there is no such point.

A program in 2 variables can be represented graphically, Figure 1, where the direction of increasing value of the objective function is indicated by the arrow.

##### Feasible Region

It can be proved that a linear function will maximise at a vertex of the feasible region. The Simplex method systematically inspects some of the vertices until the one that maximises the function is found.
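For illustration, the vertex property can be exploited directly in a tiny 2-variable program: enumerate the intersections of pairs of constraint boundaries, discard the infeasible ones, and keep the vertex with the best objective value. The sketch below is a modern illustration in Python with made-up data, not the suite's Fortran, and it assumes all constraints are of the ≤ form:

```python
from itertools import combinations

def solve_2var_lp(c, A, b):
    """Brute-force a 2-variable LP  max c'x  s.t.  Ax <= b, x >= 0,
    by enumerating vertices: intersections of pairs of constraint
    boundaries (including the axes x1 = 0 and x2 = 0)."""
    # Treat non-negativity as extra rows: -x1 <= 0 and -x2 <= 0.
    rows = A + [[-1.0, 0.0], [0.0, -1.0]]
    rhs = b + [0.0, 0.0]
    best = None
    for i, j in combinations(range(len(rows)), 2):
        a11, a12 = rows[i]
        a21, a22 = rows[j]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-12:
            continue  # parallel boundaries: no unique intersection
        x1 = (rhs[i] * a22 - a12 * rhs[j]) / det
        x2 = (a11 * rhs[j] - rhs[i] * a21) / det
        # Keep the point only if it satisfies every constraint.
        if all(r[0] * x1 + r[1] * x2 <= s + 1e-9 for r, s in zip(rows, rhs)):
            val = c[0] * x1 + c[1] * x2
            if best is None or val > best[0]:
                best = (val, (x1, x2))
    return best

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 <= 3
print(solve_2var_lp([3.0, 2.0], [[1.0, 1.0], [1.0, 0.0]], [4.0, 3.0]))
```

The Simplex method avoids this combinatorial enumeration: it moves from vertex to adjacent vertex, improving the objective at each step, and so inspects only a small fraction of the vertices.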

In the suite the version of the Simplex method that is used is a reduced inverse method; it is not the product form algorithm. The upper bound constraints are handled implicitly and only the non-zero elements of the A matrix are stored. The treatment of the constraints is such that it is possible to add and delete constraints, in bunches or singly, to and from the A matrix and b vector.

There are two computer programs to solve a linear program (2). The simplest version does not check the accuracy of the solution at the end of the calculation; this version has been found to be quite adequate for modest sized problems. The alternative version checks the accuracy of the solution and will, if necessary, attempt to re-invert the basis and then re-enter the main algorithm to re-optimise. Either of the programs and the subroutines which they call are intended to be embedded in a much more complex algorithm and are thus able to optimise a linear function either starting from a pre-existing basis, feasible or infeasible, or from the initial data of the problem.

Throughout all the programs tolerances are used to decide whether a small value is to be treated as zero or not.
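The idea can be sketched as follows; the tolerance value and the test are hypothetical, not the suite's actual values, which are given in the report:

```python
TOL = 1.0e-8  # an illustrative tolerance, not the suite's actual value

def treat_as_zero(v, tol=TOL):
    """Decide whether a computed value is to be treated as zero.
    Rounding error accumulated over many basis changes can leave tiny
    residues where exact arithmetic would give zero exactly."""
    return abs(v) < tol

print(treat_as_zero(3.0e-12))  # a floating-point residue: treated as zero
print(treat_as_zero(0.25))     # a genuinely non-zero value
```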

#### Parametric programming

Economically, solving a linear program can be viewed as finding the levels of the n activities which maximise a profit function c'x subject to the limited availability of resources. The profit that accrues from the jth activity at the level xj is cjxj where cj is the profit from the jth activity at the unit level.

The ith constraint of Ax >< b states that the total amount of the ith resource that is used must be less than the amount available, bi. It often happens in production planning that the exact amount of a resource that will be available is unknown, but it is known that the amount will lie within a specified range, bimin to bimax. Then one would like to know how the solution and the value of the objective function alter as an element of b varies. Similarly it is useful to know how the solution alters as an element of the objective function, cj, varies. These calculations are performed by parametric linear programming.

##### Linear Program

A program in 2 variables can be represented graphically, Figure 2.

The base program to solve a linear program has been extended so that it is possible to perform a parametric investigation of a single element of b or c. The exploration continuously varies the value of the element within a range above and below the initial value. By successively entering the program every element in b and in c may be varied.
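As a made-up illustration of such an exploration (not one of the suite's programs), take the program max 3x1 + 2x2 subject to x1 + x2 ≤ t and x1 ≤ 3, with x ≥ 0, and vary the resource level t. The optimal value is piecewise linear in t, the slope changing where a different constraint becomes binding:

```python
def optimal_value(t):
    """Optimal value of:  max 3x1 + 2x2  s.t.  x1 + x2 <= t, x1 <= 3, x >= 0.
    Both profits are positive, so the resource constraint is tight and
    x2 = t - x1; x1, the more profitable activity, is pushed as high as
    its own upper bound allows."""
    x1 = min(t, 3.0)
    x2 = t - x1
    return 3 * x1 + 2 * x2

# Slope 3 while the bound x1 <= 3 is slack; slope 2 once it binds at t = 3.
for t in [1, 2, 3, 4, 5]:
    print(t, optimal_value(t))
```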

#### Integer programming

In many problems, for instance ship scheduling or air-crew time-tabling, as well as the linear constraints all of the variables are constrained to take discrete values. A pure integer linear program is of the form:

```   max c'x   subject to Ax >< b,   0 ≤ x ≤  d,  x integer   (3)
```

Thus the feasible points are constrained to be the integer lattice points within the feasible region defined by the linear continuous constraints.

A program in 2 variables can be represented graphically, Figure 3.

##### Integer Program

The graphical representation of the problem makes it appear deceptively easy. Although the lattice points form a regular grid their relationship to the problem constraints is unknown.

In the suite of programs there are two cutting plane algorithms and a branch-and-bound algorithm to solve a pure integer linear program. None of these algorithms demand that the coefficients of A, b or c be integer. Without the integrality constraint the program (3) can be solved as a linear program. Only in exceptional circumstances is the linear program optimum the optimal integer solution, but it is likely that the optimal integer solution will be near the linear program optimum. All of the algorithms attempt to use this nearness and start by initially solving the linear program.

#### Cutting Planes

A cutting plane algorithm reduces the feasible region by adding more constraints, or cutting planes, to the original constraints. The cutting planes cut-off small areas of the feasible region which do not contain any integer points.

A cutting plane is a linear function with integer coefficients which maximises at the linear program optimum with a non-integer value. Thus p'x, where the pj, j = 1, ..., n, are integer, maximises at the linear program optimum with a value I + f, where I is an integer and f is a positive fraction less than 1. Thus p'x ≤ I + f.

This can be cut to

```p'x ≤ I                                        (4)
```

The cutting plane (4) reduces the feasible region by cutting off the linear program optimum without cutting off any feasible integer points. The larger the ratio f/I, the deeper the cut. The reduction of the feasible region by a cutting plane is illustrated in Figure 4 for a program in 2 variables.

##### Cutting Plane

The cutting plane is added, as a less-than-or-equal constraint, as an additional row of all integer coefficients to the A matrix and b vector. The new linear program is solved and if the solution is non-integer another cutting plane or group of cutting planes is generated. After each optimisation any non-effective added constraints are removed. This process can be repeated until an optimal linear program solution is integer.
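The arithmetic of forming the right-hand side of a cut can be sketched as follows (an illustrative Python fragment with made-up numbers, not the suite's Fortran):

```python
import math

def cut_rhs(p, x_star):
    """Given a cutting-plane vector p of integer coefficients and the
    linear program optimum x_star, p'x_star = I + f with I integer and
    0 < f < 1.  Since p'x takes an integer value at every integer point,
    the right-hand side can be rounded down: the cut is p'x <= I."""
    value = sum(pj * xj for pj, xj in zip(p, x_star))
    return math.floor(value)

# Hypothetical example: LP optimum x* = (1.5, 0.5) and p = (2, 3) give
# p'x* = 4.5, so the cut is 2x1 + 3x2 <= 4.
print(cut_rhs([2, 3], [1.5, 0.5]))
```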

The first cutting plane algorithm is essentially a straightforward application of Gomory's Method of Integer Forms (Gomory, 1963) except that at each successive linear program optimum not one cutting plane but one from each row of the (reduced) inverse is added. In the alternative cutting plane algorithm the intention is to keep the coefficients pj and hence I as small as possible so as to make the cut as deep as possible. The cutting plane is derived from the scaled down sum of the effective constraints at the linear program optimum.

The alternative algorithm is more successful than Gomory's algorithm but neither has been as effective as the branch-and-bound algorithm below.

#### Branch-and-bound

The branch-and-bound algorithm is a version of the Land and Doig (1960) algorithm. At the linear program optimum the program is partitioned into a series of subprograms. The partitioning, or branching, takes place by fixing a variable at alternative values. Thus a subprogram is itself an integer linear program but differs from the original program in that a variable has been fixed at a value. By dropping the integrality constraint a subproblem can be solved as a linear program and then can itself be partitioned into another level of subprograms by fixing another variable at alternative values. This process can be viewed as the development of a tree, Figure 5.

##### Branch-and-Bound

In the computer program, because of the difficulty of storing all these subprograms, the calculation proceeds down one branch to its end and then backtracks to the nearest unfinished branch and continues down it. A branch is finished if the solution to the linear program is infeasible, or integer, or has a value of the function worse than the known best. The initial best value of the objective is an arbitrarily low value which is improved as integer solutions are found. The rule followed for determining upon which non-integer variable to branch is that suggested by Little et al. (1963), so that the branch which is not followed initially is the one with the poorest value of the function.
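The shape of the calculation can be sketched for the special case of a single resource constraint, where the linear program relaxation can be bounded greedily by profit-to-resource ratio. This is an illustrative toy in Python with made-up data, not the Land and Doig implementation in the suite:

```python
def branch_and_bound(c, a, b, d):
    """Depth-first branch-and-bound sketch for the pure integer program
        max c'x  s.t.  a'x <= b,  0 <= x <= d,  x integer,
    with one constraint (all a[j] > 0), so the linear relaxation bound
    can be computed greedily.  Variables are fixed at alternative integer
    values; branches whose bound cannot beat the incumbent are abandoned."""
    n = len(c)
    best_value = [float('-inf')]   # arbitrarily low initial incumbent
    best_point = [None]

    def frac_bound(profit, cap, k):
        # Upper bound: value of the linear relaxation over the unfixed
        # variables x_k .. x_{n-1}, filled greedily by profit/resource
        # ratio, the last variable allowed to be fractional.
        ub = profit
        for j in sorted(range(k, n), key=lambda j: -c[j] / a[j]):
            take = min(d[j], cap / a[j])
            ub += c[j] * take
            cap -= a[j] * take
            if cap <= 1e-9:
                break
        return ub

    def branch(k, profit, cap, values):
        # Fixing the remaining variables at zero gives a feasible point.
        if profit > best_value[0]:
            best_value[0] = profit
            best_point[0] = values + [0] * (n - k)
        if k == n:
            return    # branch finished: every variable fixed
        if frac_bound(profit, cap, k) <= best_value[0] + 1e-9:
            return    # branch finished: bound cannot beat the incumbent
        # Fix x_k at each alternative integer value in turn and recurse;
        # returning from a call is the backtrack to an unfinished branch.
        for v in range(min(d[k], int(cap // a[k])), -1, -1):
            branch(k + 1, profit + c[k] * v, cap - a[k] * v, values + [v])

    branch(0, 0.0, float(b), [])
    return best_value[0], best_point[0]

# max 5x1 + 4x2 + 3x3  s.t.  2x1 + 3x2 + x3 <= 5,  0 <= x <= 2, x integer
print(branch_and_bound([5, 4, 3], [2, 3, 1], 5, [2, 2, 2]))
```

The depth-first recursion corresponds to following one branch to its end; the bound test is what makes the tree search practical.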

This algorithm can also be used when only some of the variables are required to take discrete values.

#### Quadratic programming

The development, by H. Kuhn and A. W. Tucker, of the necessary conditions that the solution to a non-linear program must satisfy soon led to the construction of algorithms to solve non-linear programs. A comparatively simple non-linear program is a quadratic program of the form:

```   max c'x+½x'Dx   subject to Ax >< b,   0 ≤ x ≤  d             (5)
```

where D is an n × n matrix. The program arose because the use of a linear objective function is an oversimplification if there is a joint profit which accrues when two activities are performed. Thus, as well as profits that are directly proportional to the levels of the individual activities, there is a joint profit, so the total profit from activities i and j is

```cjxj + cixi + ½(dij + dji) xjxi
```
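Evaluating such an objective can be illustrated directly (a Python sketch with hypothetical numbers, not the suite's code):

```python
def quadratic_profit(c, D, x):
    """Total profit c'x + 0.5 * x'Dx: direct profits plus a joint
    profit 0.5*(d_ij + d_ji)*x_i*x_j for each pair of activities."""
    n = len(x)
    direct = sum(c[j] * x[j] for j in range(n))
    joint = 0.5 * sum(D[i][j] * x[i] * x[j]
                      for i in range(n) for j in range(n))
    return direct + joint

# Two activities with direct unit profits 3 and 2 and a joint term:
c = [3.0, 2.0]
D = [[0.0, 1.0],
     [1.0, 0.0]]   # d_12 + d_21 = 2, so the joint profit is x1*x2
print(quadratic_profit(c, D, [2.0, 1.0]))
```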

In the program suite there is a version of Beale's Algorithm (Beale, 1967). This will find the optimal solution if D is negative definite; otherwise only a local optimum will be found. At each basis change the algorithm recomputes the linear function to be maximised. If the basis is feasible and its function value is lower than that at the previous basis, an auxiliary constraint is added to the A matrix and b vector. These constraints do not ultimately restrict the feasible region, as their dual values are only permitted to be zero, and at each basis any ineffective auxiliary constraints are deleted.

The algorithm differs from Beale's in the handling of the auxiliary constraints and in when they are added. The algorithm is more effective than Beale's in that it takes fewer iterations to solve a series of test problems (Land, 1972).

Both Dr. Land and I have found that the programs are readily adaptable to our needs and we hope very much that these programs will be developed by others with ideas that they wish to test upon the computer. We would be very pleased to hear of such developments.

#### References

1. Beale, E. M. L. (1967). Nonlinear Programming, ed. Abadie, J., North Holland Publishing Co., 143-163.

2. Gomory, R. E. (1963). An algorithm for integer solutions to linear programs, Recent Advances in Mathematical Programming, ed. Graves, R., and Wolfe, P., McGraw-Hill.

3. Land, A. H. (1972). An inverse basis method for Beale's quadratic programming algorithm, Management Science (to be published).

4. Land, A. H., and Doig, A. (July 1960). An automatic method of solving discrete programming problems, Econometrica, 28(3).

5. Land, A. H., and Powell, S. (1971). Fortran programs for linear and non-linear programming. The report is available on request from the author at Atlas Computer Laboratory.

6. Little, J. D. C., Murty, K. G., Sweeney, D. W., and Karel, C. (1963). An algorithm for the travelling salesman problem, Operations Research, 11.

7. Stigler, G. (1945). The cost of subsistence, Journal of Farm Economics, XXVII. For impecunious readers!