
### Chapter 6: Practical Problems

#### 6.1 THE CHECKERBOARD

Probably the best known problem on the 8 × 8 checkerboard is whether it is possible to place 8 queens in such a way that no queen can capture any other queen. The problem was first posed around 1848 and, long before computers, it was proved by hand that there are 12 such distinct patterns (32), only one of which (Fig. 6.1) has a rotational symmetry:

##### Fig. 6.1 The symmetrical solution to the eight queens problem

The total number, including reflections and rotations, is therefore (11 × 8) + (1 × 4) = 92.

The problem has also been solved for boards other than side = 8, thus:

| SIDE (N) | NUMBER OF DISTINCT PATTERNS |
| --- | --- |
| 4 | 1 |
| 5 | 2 |
| 6 | 1 |
| 7 | 6 |
| 8 | 12 |
| 9 | 46 |
| 10 | 92 |
| 11 | 341 |
| 12 | 1787 |
| 13 | 9233 |
| 14 | 45752 |
| 15 | 285053 |
| 16 | 1846955 |

but a general formula for N has not been found.

At first sight the problem of placing N rooks on a board of side N is easier than placing N queens. Including reflections and rotations the answer is N!, but no one has yet (to my knowledge) obtained the number of distinct patterns for even the ordinary 8 × 8 board.

The crude approach to solving the problem of placing sets of like chess men on a board is as follows:

Let N = 4. The notation for the board is

##### Notation for the board

Instead of having a list of squares a piece can move to, it is better to represent the moves as a bit pattern. Therefore the queen problem would use the table below.

Call this array q[1:13]. The pseudo-Algol program could then be

```
for i := 1 step 1 until 2 do
  for j := first 0 bit in q[i] step 1 until 8 do
    for k := first 0 bit in q[i] and q[j] step 1 until 12 do
      if q[i] and q[j] and q[k] ≠ q[13] then answer;
```

The columns show, for a QUEEN IN SQUARE 1-16, which squares it CAN MOVE TO (1) or not (0); row 13, all 1's, is the fully covered board against which the program compares:

| QUEEN IN SQUARE | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 1 |
| 2 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 |
| 3 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 1 | 0 |
| 4 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 1 |
| 5 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 |
| 6 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 0 | 1 |
| 7 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 0 |
| 8 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 1 |
| 9 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 0 | 0 |
| 10 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 0 |
| 11 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 1 | 1 | 1 |
| 12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 |
| 13 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |

Actual programs which have been written are far more sophisticated than the above but follow the broad outline.
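The crude search can be rendered as a short backtracking program in a modern language — a sketch of the same idea, not the book's program. It counts every placement, including reflections and rotations, so for N = 8 it returns 92:

```python
# Count all ways to place n non-attacking queens on an n x n board,
# including reflections and rotations (92 for n = 8).
def count_queens(n):
    def place(row, cols, diag1, diag2):
        if row == n:
            return 1                      # all n queens placed
        total = 0
        for col in range(n):
            if col in cols or (row - col) in diag1 or (row + col) in diag2:
                continue                  # square attacked by an earlier queen
            total += place(row + 1, cols | {col},
                           diag1 | {row - col}, diag2 | {row + col})
        return total
    return place(0, set(), set(), set())
```

`count_queens(8)` gives 92, in agreement with (11 × 8) + (1 × 4) above; dividing out the symmetries by hand recovers the 12 distinct patterns.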

The problem becomes much more difficult when a three-dimensional board is used. Although the program just described could attempt the problem of 64 queens on the 8 × 8 × 8 board the time taken would be prohibitive. In a study made by the author (33) the following problem was solved.

Given 3 × R rooks, how many ways can they be placed on a 3 × 3 × 3 board such that there are only R rooks on each row, column, and (looking down through the board) tube? More than one rook may occupy a cell. Including reflections and rotations the answers were

R | NUMBER OF WAYS |
---|---|

1 | 12 |

2 | 132 |

3 | 847 |

4 | 3921 |

5 | 14506 |

6 | 45402 |

and the general formula is

{R(R+1) [R(R+1) [R(R+1) [31R(R+1) + 1004] + 6820] + 4272]} / 4032 + 1
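The closed form can be checked mechanically against the six tabulated values; the sketch below codes it with the grouping that reproduces them (divide the bracketed product by 4032, then add 1):

```python
# Closed form for the number of ways (including reflections and
# rotations) of placing the 3D rooks, with s = R(R+1).
def rook_ways(R):
    s = R * (R + 1)
    return s * (s * (s * (31 * s + 1004) + 6820) + 4272) // 4032 + 1
```

For R = 1 to 6 this reproduces 12, 132, 847, 3921, 14506, and 45402.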

Whenever the question *How many different ways can something be done* is
asked, then the subjects of Combinatorial Mathematics and Diophantine Analysis are
involved. Games are a rich source of such problems and, in many cases, the computer
is used to verify or clarify some theory or idea. An example of such a problem was set
by J. McCarthy to his class of students at the Stanford Computation Center in
California. It asked *How many different ways can an N by N checker board be cut
into two congruent halves?*
(Cutting must be along grid lines and rotations, reflections are not included.)
The problem is conceptual. Just how can a program systematically produce all the
distinct *positions*? A program written by
B. Fowler (34) is a beautiful example of how good programmers think. It worked as
follows: *Start at the centre, move one space, and then turn right. If you bump
into a previous path then back up one space, turn left 90 degrees, and continue.
A solution is obtained when the edge of the board is reached; score it, then back up
one space, turn left, and continue.* The answer is reached when the program finds
the direct route to the edge of the board.
This program finds all solutions for even-order boards.
For N = 4 it would find the 6 solutions in the order shown in Fig. 6.2.

##### Fig. 6.2

The answer for N = 8 is 92,263. Odd-order boards, with the centre square missing, are more difficult to program. At the moment no answer for N > 8 has been found.

To return to actually playing games on the checker board: there are a number of games related to draughts and chess that are of such simplicity that they have been solved. These games then have algorithms and, as such, are only problems to an expert player.

##### Fox and geese

This game is related to draughts and played on the black squares of an 8 × 8 board. One piece (the fox) behaves like a king, except that he cannot capture. The opposing 4 pieces (the geese) behave like draughts men, i.e. they can only move forwards. Fig. 6.3 shows a possible position.

##### Fig. 6.3

The arrows indicate the possible moves of the two sides. The objective of the fox is to break through the line of geese; the objective of the geese is to force the fox back, and eventually trap him against a side. The value of the game is a win for the geese. It is immaterial how long the board is (only how wide it is) and they can, in fact, force the fox back indefinitely.

Part of the algorithm to play this game is

- Always try to play the geese in the order A,B,C,D,D,C,B,A,A, etc., following the arrows shown in Fig. 6.3.
- The fox will attempt to frustrate this ordered advance by blocking. When he does so there will be a goose (A or D) furthest from him. This goose (A or D) must now move towards the fox; this may result in it crossing over and changing places with its partner (B or C). However, the geese A,B (C,D) are not allowed to cross from the left (right) half of the board to the right (left) half.

The position can now become complicated. Six distinct patterns must be recognised (with templates), and the correct response made. The detection of whether or not the fox is blocking the ordered advance is also done by templates. The geese can usually return to the straight line pattern and advance systematically again, until the next block. It is left to the reader to build up the full algorithm from the two fundamentals just given.

If the game is played on a 4 × 4 board (2 geese versus the fox), then the fox can always win if he moves first. For larger boards with more than 2 geese the fox should always lose. The larger the board (N × N), the more trivial the game becomes for the N/2 geese. An, as yet, unsolved game is 5 geese against 2 foxes on a 10 × 10 board.

##### Maharajah

This is a similar game, but uses chess men. One side (black) has the normal set of 16 men arranged as usual. The other side (white) has only one, the Maharajah, who combines the moves of a queen and a knight. The objective of black is to stifle or capture the Maharajah; the objective of the Maharajah is to capture the black king.

Because pawns are not allowed to queen, and also because the loss of any black man is critical, the game is very similar to Fox and Geese, i.e. given an infinitely long board, can black force the Maharajah back indefinitely? The answer is yes, and the algorithm is much simpler than for Fox and Geese. First get the men into the formation shown at the top of Fig. 6.4. It is not difficult to do this in 19 moves without the Maharajah being able to capture any man. Advance the pawns P1,3,2,4,5,7,6 first (N.B. if, after moving P7, the M*R2, then N-KB3 stalemates [see Addendum]), then the pieces in the order Q,K,P8,N,N,B,B. The knights require three and four moves each. Then move the men, in the order shown, through a 20-move cycle. If move 14 is not possible then gain a tempo by moving one of the rooks up one square, or interchange with move 15. The position of the Maharajah only affects a choice at move 17. If black should ever have the opportunity to capture the Maharajah then he should do so, but to detect this would complicate the algorithm which, basically, is unaffected by his movements. We assume the Maharajah never puts himself into check.

##### Fig. 6.4 The 20-move cycle which safely advances the chess men against the Maharajah

Having shown that black can always win, the remaining problem is how should a program play the Maharajah against an opponent who, hopefully, does not know a winning algorithm? In this case it is necessary to list the legal moves of both sides and place the Maharajah where he controls the most squares containing enemy men. He is initially restricted to making queen moves only. The instant he gets a chance to safely capture any man (even with a knight-type move) he should take it. Humans first playing this game are surprisingly blind to the possibility of a knight move, and the balance of the game is so delicate that once a black man is lost the program can play havoc without the necessity of a more explicit plan of attack. Finally note that the Maharajah can always give checkmate to a lone king.

##### Addendum

It is difficult to avoid stalemate with a cyclic algorithm; however, should M*R2 in the opening moves, it is possible to finish more elegantly by capturing the Maharajah thus:

P-K4, leaving the Maharajah with two moves, M*RP or M*N; black's two mating sequences are N-R3 Q-R5 QN-K5 N-Q2 Q-K2 (tempo) Q-R3 Q-B3 ck Q-N2 mate.

#### 6.2 GRUNDY'S GAME

In the course of this book the reader will have been subjected to the jargon of game playing. Concepts such as value of the game, sub-goal, ply, template, minimax, and so on, have been discussed. A good way to demonstrate these fundamentals is to choose a game which has been solved in its simpler forms.

Grundy's game (35) is related to Nim. Again, at the start, there
is a heap (or heaps) of matches, but this time the two players alternate in dividing
any single heap into two unequal heaps.
Again there are two versions. In the ordinary game the winner is the last player to
make a division, i.e. he leaves all the heaps with either one or two matches in each
of them. In the *misere* game the winner is the first player who is unable to
play.
Consider a single heap of 8 matches. The opening player can generate the *directed*
graph shown in Fig. 6.5. By minimaxing from the terminal positions it is possible to
find the winning path and opening move for both the ordinary and *misere* game.
These paths are shown in Figs 6.6 and 6.7.

##### Fig. 6.5 Generated graph for 8 matches

##### Fig. 6.6 Winning path. Normal game

##### Fig. 6.7 Winning path. Misere game

The value of the game for 8 matches is, therefore, a win for the opening player in either version. An important point to note is that, in making this analysis, the values of all games starting with less than 8 matches have also been obtained. These positions are therefore sub-goals and, once stored away with the related winning move, need not be analysed again. From now on, as we analyse larger and larger opening piles, the program can apply skeleton templates (2's and 1's are ignored) to the heaps as they are broken down. The results for all heaps of 12 matches or less are shown in the following table. The entries show a correct winning move to apply to a heap of that number, a blank entry means the heap is a losing position.

| Number of Matches | Value | Move (Ordinary) | Value | Move (Misere) |
| --- | --- | --- | --- | --- |
| 1 | L | - | W | - |
| 2 | L | - | W | - |
| 3 | W | 2,1 | L | - |
| 4 | L | - | W | 3,1 |
| 5* | ? | 4,1 | ? | 3,2 |
| 6 | W | 4,2 | L | - |
| 7 | L | - | W | 6,1 |
| 8* | ? | 7,1 | ? | 6,2 |
| 9 | W | 7,2 | L | - |
| 10 | L | - | W | 9,1 |
| 11* | ? | 10,1 | ? | 9,2 |
| 12 | W | 10,2 | L | - |

The ? indicates numbers which are winning positions for either version of the game. These numbers (5,8,11) are very important as will be seen.
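The sub-goal evaluations can be checked with a small solver — my own sketch, not the book's program. A position is a sorted tuple of heaps; heaps of 1 or 2 are dead (unsplittable) and are dropped; in the misere version a player with no move wins:

```python
from functools import lru_cache

# Grundy's game: a move splits one heap into two unequal heaps.
def moves(heaps):
    out = set()
    for i, h in enumerate(heaps):
        rest = heaps[:i] + heaps[i + 1:]
        for a in range(1, (h + 1) // 2):       # unequal splits only
            out.add(tuple(sorted(x for x in rest + (a, h - a) if x >= 3)))
    return out

@lru_cache(maxsize=None)
def wins(heaps, misere):
    ms = moves(heaps)
    if not ms:
        # the player to move cannot split: he has lost the ordinary
        # game but won the misere game
        return misere
    return any(not wins(m, misere) for m in ms)

def heap_value(n, misere):
    return wins((n,) if n >= 3 else (), misere)
```

`heap_value(n, False)` reproduces the Ordinary column of the table, `heap_value(n, True)` the Misere column; `wins((5, 8), ...)` confirms that 8,5 is a winning reply to a heap of 13 in both versions.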

To show how these sub-goal evaluations can now be extended to larger heaps consider the case of 13. This breaks down into

| SPLIT OF 13 | ORDINARY | MISERE |
| --- | --- | --- |
| 12 and 1 | W and L | L and W |
| 11 and 2 | ? and L | ? and W |
| 10 and 3 | L and W | W and L |
| 9 and 4 | W and L | L and W |
| 8 and 5 | ? and ? | ? and ? |
| 7 and 6 | L and W | W and L |
Note that ? can be either a W or a L.

To calculate a winning opening move in the ordinary game we want to split the pile into two heaps such that the sub-goal values are either (W and W) or (L and L). The only two answers are 11,2 and 8,5. But the opponent must never be given an odd number of ? heaps, because he then has control of which way the game goes; so the only answer is 8,5. The opponent must then break one of the ? heaps, leaving you with the control.

The misere game is much easier from this position. A winning move is to split the piles in such a way that the two heaps have sub-goal values of different sign, i.e. W and L or L and W. Therefore any move except 11,2 is a winner in the misere game.

There are usually a number of winning moves from a particular position. The above analysis has been made to discover the value of the game for heaps up to 30 matches, and one of the winning moves has been discovered thus:

| Number of Matches | Value | Move (Ordinary) | Value | Move (Misere) |
| --- | --- | --- | --- | --- |
| 13 | ? | 8,5 | ? | 8,5 |
| 14 | ? | 10,4 | ? | 12,2 |
| 15 | W | 12,3 | L | - |
| 16 | ? | 11,5 | ? | 11,5 |
| 17 | ? | 10,7 | ? | 15,2 |
| 18 | W | 15,3 | L | - |
| 19 | ? | 11,8 | ? | 11,8 |
| 20 ! | L | - | L | - |
| 21 | ? | 20,1 | L | - |
| 22 | ? | 20,2 | ? | 20,2 |
| 23 ! | L | - | L | - |
| 24 | ? | 23,1 | ? | 23,1 |
| 25 | ? | 23,2 | ? | 23,2 |
| 26 ! | L | - | L | - |
| 27 | ? | 26,1 | ? | 26,1 |
| 28 | ? | 26,2 | ? | 26,2 |
| 29 | ? | 13,16 (5,8,5,11) | ? | 13,16 |
| 30 | ? | 26,4 | ? | 26,4 |

Note how the critical numbers 5, 8, and 11 keep appearing. The results also indicate that if the game is played beginning with a pile of 20, 23, or 26 matches, then it is the second player who should always win in either version (!). The game is a good demonstration of how sub-goal evaluation can be backed up for selecting a move. For example, from the table, a pile of 50 matches could be split 28,22 by the opening player for either version.

#### 6.3 SNAKES AND LADDERS

(This work was done by F. R. A. Hopgood (20).)

##### Rules of game

- There can be any number of squares on the board, but 100 is the most usual.
- A set of L pairs of squares define the tops and bottoms of ladders. Let us write these (TLi,BLi) for i = 1 to L.
- A set of S pairs of squares define the heads and tails of snakes. Let us write these (HSi,TSi) for i = 1 to S.
- It is not possible to have BLi = HSj.
- The game is played by throwing a die and advancing a piece by the resulting number of squares. The piece starts at square 0. The aim is to reach square 100 exactly.

If the piece reaches square P, then:

- if P = BLi for some i, the piece is moved to square TLi;
- if P = HSi for some i, the piece is moved to square TSi.

Note: there is no difference between a snake and a ladder except that TLi > BLi and HSi > TSi for all i; if TSi > HSi then it is really a ladder.

##### Statement of problem

Given a particular board it is necessary to find

- The most likely number of moves needed to reach square 100
- The average number of moves needed to reach square 100
- The least number of moves needed to reach square 100.

These questions are important because it is possible to design a board which has no solution, or a board which usually takes too long and is therefore unreasonable as a child's game.

The game could be analysed by Monte Carlo methods, i.e. we write a program to simulate a large number of games with the die replaced by a random number generator. However, the game is completely determined and a more analytical approach will be adopted.

Let us define P_{i}^{j} as the probability that the piece will be on square i after j moves. Initially

P_{0}^{0} = 1 and P_{i}^{0} = 0 for i = 1 to 100.

Assuming no snakes or ladders, then

P_{i}^{n+1} = (P_{i-1}^{n} + P_{i-2}^{n} + P_{i-3}^{n} + P_{i-4}^{n} + P_{i-5}^{n} + P_{i-6}^{n}) / 6

and the first two moves would generate (in squares 1 to 12)

| SQUARE | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MOVE 1 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | 1/6 | | | | | | |
| MOVE 2 | 0 | 1/36 | 2/36 | 3/36 | 4/36 | 5/36 | 6/36 | 5/36 | 4/36 | 3/36 | 2/36 | 1/36 |

So we see, for example, that P_{7}^{2} = 1/6.

However, with snakes and ladders:

- If any of squares (i-1) to (i-6) is the bottom of a ladder or the head of a snake, then that term is omitted (the probability of resting on such a square is zero).
- If square i is the top of a ladder, then add to P_{i}^{n+1} the probability P_{j}^{n+1} (where j is the bottom of the ladder); P_{j}^{n+1} is then set to zero.
- If square i is the tail of a snake, then add to P_{i}^{n+1} the probability P_{j}^{n+1} (where j is the head of the snake); P_{j}^{n+1} is then set to zero.

This can be defined in a program as follows: BDSIZE is the size of the board and AR is a scratch array. The set of relations defining the probabilities will be written as

P_{i}^{n+1} = 1/6 (AR[i,1,1] P_{AR[i,1,2]}^{n} + AR[i,2,1] P_{AR[i,2,2]}^{n} + ... + AR[i,LENGTH[i],1] P_{AR[i,LENGTH[i],2]}^{n})

where the number of terms (LENGTH[i]) depends on how many heads of snakes or bottoms of ladders appear in the previous 6 board positions.

Defining the complete set of snakes and ladders by

```
integer array SNAKLAD[1:2, 1:BDSIZE];
```

then SNAKLAD[1,j] is the bottom of a ladder or the head of a snake, and SNAKLAD[2,j] is the related top of ladder or tail of snake. The recurrence relations can then be defined by:

```
for i := 1 step 1 until BDSIZE do
begin
  LENGTH[i] := 6;
  for j := 1 step 1 until 6 do
  begin
    AR[i,j,1] := 1;
    AR[i,j,2] := i - j;
    if i - j < 0 then AR[i,j,1] := 0;
  end;
end;
```

The effect of the snakes and ladders is then added by:

```
for i := 1 step 1 until NO SNAKLAD do
begin
  K := SNAKLAD[1,i]; M := SNAKLAD[2,i];
  L := LENGTH[M]; N := LENGTH[K];
  LENGTH[M] := L + N; LENGTH[K] := 0;
  for j := 1 step 1 until N do
  begin
    AR[M,j+L,1] := AR[K,j,1];
    AR[M,j+L,2] := AR[K,j,2];
  end;
end;
```

At the top of the board the last square must be reached exactly, otherwise the piece stays where it is. This modifies the recurrence relations as follows:

```
for j := 1 step 1 until 5 do
begin
  i := BDSIZE - 6 + j;
  if LENGTH[i] = 0 then goto PX;
  L := LENGTH[i];
  AR[i,L+1,1] := j;
  AR[i,L+1,2] := i;
  LENGTH[i] := L + 1;
PX: end;
```

Once the recurrence relations have been defined then we can initialise the probabilities at each square of the board to

```
P[0] := 1;
for i := 1 step 1 until BDSIZE do P[i] := 0;
```

The probabilities, after a move, can be defined in Q[i] as follows:

```
Q[0] := 0;
for i := 1 step 1 until BDSIZE do
begin
  Q[i] := 0;
  if LENGTH[i] = 0 then goto SS;
  for j := 1 step 1 until LENGTH[i] do
  begin
    if AR[i,j,1] = 0 then goto SA;
    Q[i] := Q[i] + AR[i,j,1] * P[AR[i,j,2]];
  SA: end;
  Q[i] := Q[i] / 6;
SS: end;
```

Replacing P[i] by Q[i] completes a single move. Repeating the process a number of times will answer all the questions posed initially. The most likely number of moves is given by the move producing the highest value for P[BDSIZE]. The average number of moves is given when the probability of reaching the top is 50%. The least number of moves is obtained by the first move which generates a value of P[BDSIZE] which is non-zero.

##### Results

The following 100 square board was analysed:

| LADDERS | SNAKES |
| --- | --- |
| 3-20 | 99-63 |
| 11-28 | 97-66 |
| 16-35 | 92-25 |
| 22-37 | 90-48 |
| 49-67 | 85-59 |
| 52-71 | 83-45 |
| 57-76 | 75-13 |
| 61-78 | 60-23 |
| 73-86 | 56-1 |
| 81-98 | 54-36 |
| 88-91 | 51-6 |
| TOTAL RISE = 174 | TOTAL DROP = 507 |

##### Board showing snakes and ladders

Fig. 6.8 can be thought of as a long exposure photograph of games on the board. It shows where the piece is most likely to be at any point in the game, thus giving a measure of the wear each square will undergo. For this board the answers were

- Most likely number of moves = 20 (See Fig. 6.9)
- Average number of moves = 80 (See Fig. 6.10)
- Least number of moves = 8, for example: 3,2,6,6,6,2,5,4.
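The Algol fragments can be gathered into one runnable sketch. The Python below is my translation, not the book's program: it uses exact rational arithmetic, codes the overshoot rule as "stay put", and propagates the distribution for the board tabulated above to find the first move at which square 100 can be reached:

```python
from fractions import Fraction

# Board from the text: bottom -> top of each ladder, head -> tail of
# each snake.  A piece landing on a key square slides to its value.
LADDERS = {3: 20, 11: 28, 16: 35, 22: 37, 49: 67, 52: 71,
           57: 76, 61: 78, 73: 86, 81: 98, 88: 91}
SNAKES = {99: 63, 97: 66, 92: 25, 90: 48, 85: 59, 83: 45,
          75: 13, 60: 23, 56: 1, 54: 36, 51: 6}
JUMP = {**LADDERS, **SNAKES}
SIZE = 100

def one_move(p):
    """One throw of the die applied to the distribution p[0..SIZE]."""
    q = [Fraction(0)] * (SIZE + 1)
    q[SIZE] = p[SIZE]                   # a finished piece stays finished
    for sq in range(SIZE):
        if p[sq] == 0:
            continue
        for die in range(1, 7):
            dest = sq + die
            if dest > SIZE:
                dest = sq               # overshoot: stay where you are
            q[JUMP.get(dest, dest)] += p[sq] / 6
    return q

p = [Fraction(0)] * (SIZE + 1)
p[0] = Fraction(1)
arrivals = []                           # P(first reach 100 at move n)
for move in range(1, 61):
    finished = p[SIZE]
    p = one_move(p)
    arrivals.append((move, p[SIZE] - finished))
least = next(m for m, pr in arrivals if pr > 0)
```

The run reproduces the least number of moves (8); the same arrival distribution contains the most likely and average figures quoted above.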

##### Fig. 6.8, 6.9, 6.10 Board showing where piece is most likely to be at any point in game. Most likely move. Probability in finishing in less than N moves

A simple addition to the rules raises the game from its mechanical level. Each player, at his turn, can either throw the die or move two squares forward. The decision is apparently obvious if one is two squares from the bottom of a ladder or one square from the head of a snake, but this is not always true. For example if the board is

| LADDERS | SNAKES |
| --- | --- |
| 3-100 | 2-1 |

then the best play is not to throw the die at all!

The game can still be completely analysed. The program above starts from square 0; it can be repeated for every square on the board, giving a measure of how favourable each square is. The value for the top of a ladder is also placed in the square of its bottom, and vice versa for a snake. Having computed these values the program compares the *certainty* value of the square 2 ahead against the average value of the squares 1, 2, 3, 4, 5, and 6 ahead, and chooses the higher. Such a game, especially on well designed boards, should greatly favour a computer.

Another variant of the game is called *Movable Snakes and Ladders*.
Each player, at his turn, can alter the position of two of his snakes; he is even allowed to
transform a snake into a ladder as already mentioned, but as only his opponents are affected by
his snakes this is an unlikely move. Such an addition to the rules definitely raises the level
of skill required, and it will be interesting to see whether game playing programmers
can develop any algorithms for this version.

#### 6.4 MATHEMATICAL GAMES

Martin Gardner is editor of the Mathematical Games Department which has been a regular
feature of Scientific American since January 1957. Quite often reports appear of mathematicians
using computers to solve problems, especially in the field of combinatorial
mathematics, i.e. does a certain pattern exist and, if so, how many different patterns
are there? One such report concerned *The Magic Hexagon*.

##### The Magic Hexagon (Scientific American, August 1963)

##### Statement of problem

Can the integers 1 to 19 be placed in the 19 cells of Fig. 6.11 such that each of the 15 rows sums to 38?

##### Fig. 6.11

After fifty-two years of study, C. W. Adams sent a solution to Gardner who
was *only mildly impressed*. However, on finding nothing in the literature on
magic hexagons, Gardner sent Adams's hexagon to a combinatorial mathematics
expert, C. W. Trigg. The reply was astonishing. Not only was (a) the pattern unique,
but also (b) no other magic hexagon of any size is possible.

Part (b) requires a Diophantine analysis to show that the expression

[9(n^{4} - 2n^{3} + 2n^{2} - n) + 2] / [2(2n - 1)]

has integral solutions only for n = 1 or 3.

Such generalised problems are difficult to answer using computers; however, the particular case of part (a) is more applicable, and Gardner asked if any reader with access to a computer could verify Trigg's result. A very bad program would be (using the notation of Fig. 6.11)

```
for A := 1 step 1 until 19 do
  for B := 1 step 1 until 19 do
    .......
      for S := 1 step 1 until 19 do
        check all combinations:
        if A + B + H = 38 and A + C + I = 38 and ... and A + D + G + R + S = 38 then answer;
```

The program does not remove possible rotations and reflections and, also, a
computer would wear out before completing such a nested loop. Before the program is
written a certain amount of analysis is necessary. The reader is invited to verify the
following ( *(A)* means *contents of A* etc.):

If a solution exists, then

- Statement 1: The hexagon is completely defined by cells A-G.
- Statement 2: (A) cannot be 1 or 2.
- Statement 3: To delete most rotations we need only consider 8 ≤ (A) ≤ 14.
- Statement 4: To delete reflections let B always exceed C; therefore E must always exceed F.
- Statement 5: (P) = (G)+(D)+(F); (Q) = (G)+(E)+(D); (K) = (G)+(N)+(E); (C) = (G)+(R)+(N); (B) = (G)+(O)+(R); (J) = (G)+(F)+(O).
- Statement 6: 1 ≤ (G) ≤ 8.

From these statements the following program was constructed.

```
for A := 8 step 1 until 14 do
for B := 20-A step 1 until 19 do
begin
  if H equal to A or B then goto L1;
  for C := 19-A step 1 until B-1 do
  begin
    if I equal to A or C then goto L2;
    for G := 1 step 1 until 8 do
    begin
      if G equal to A,B,C,H or I then goto L3;
      for D := 1 step 1 until 32-B-G do
      begin
        if D equal to A,B,C,H,I or G then goto L4;
        for E := 2 step 1 until 19-D-G do
        begin
          if (E,J or L fail or we've had (D+E+G)) then goto L5;
          for F := 1 step 1 until E-1 do
          begin
            if (F,K or M fail or we've had (D+F+G)) then goto L6;
            if L+P ≤ 19 then print solution;
          L6: end;
        L5: end;
      L4: end;
    L3: end;
  L2: end;
L1: end;
```

To see the program *working* the reader can cut out 19 hexagons,
number them, and try to find Adams's arrangement. To actually find the solution start
with A = 9, B = 14, C = 11, and G = 5.

The program took about 20 seconds on an ICL Atlas to find the first solution; it would have taken considerably longer to run to completion and hence prove the uniqueness of the pattern (during which it would find only one other rotation). The program is good practice for solving games (like noughts and crosses) where all possible positions have to be generated before the analysis can be made.
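Adams's pattern itself can be verified mechanically. The layout below is one orientation of the solution in cube coordinates (x + y + z = 0, |x|, |y|, |z| ≤ 2) — a coordinatisation assumed here, since Fig. 6.11 and its A-S labels are not reproduced. The 15 lines are the sets of cells sharing an x, a y, or a z coordinate:

```python
# One orientation of the unique order-3 magic hexagon.
ROWS = [[15, 13, 10],
        [14, 8, 4, 12],
        [9, 6, 5, 2, 16],
        [11, 1, 7, 19],
        [18, 17, 3]]

cells = {}
for i, row in enumerate(ROWS):
    x = i - 2
    y_top = min(2, 2 - x)               # y of the leftmost cell in this row
    for j, value in enumerate(row):
        cells[(x, y_top - j)] = value   # implied z is -x - y

line_sums = []
for k in range(-2, 3):
    line_sums.append(sum(v for (x, y), v in cells.items() if x == k))
    line_sums.append(sum(v for (x, y), v in cells.items() if y == k))
    line_sums.append(sum(v for (x, y), v in cells.items() if -x - y == k))
```

All 15 sums come out at 38, and the 19 cells use each of the integers 1-19 exactly once.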

Gardner received four confirmations of Trigg's proof. The shortest to run took only one minute and is obviously better than the above program. A good exercise for student programmers is to make a deeper analysis, then write a better program to give a confirmation in less than 30 seconds on a 1 µsec cycle time computer.

##### Dudeney's magic hexagon (36)

Exercise: There are twelve lines of three cells in Adams's hexagon
(the six sides and the six lines radiating from the centre G). Write a program to
find the two arrangements of the numbers 1 to 19 for which all twelve lines sum to 23.

One of the skills of programming is to balance the time spent in analysing the problem
with the time required by the computer to obtain the answer. Sometimes, if the analysis is
neglected, the computer can be defeated by sheer size. An example of this was the
case of *Euler's Conjecture*.

##### Euler's conjecture (Scientific American, November 1959)

A famous puzzle in Euler's time (1707-83) was to arrange the 16 court cards into a 4 × 4 square such that no row, column, or diagonal contains more than one card of each rank (A,K,Q,J) or more than one card of each suit (H,C,D,S). A solution is given in Fig. 6.12. Let A,K,Q,J = 00,10,20,30 respectively (most significant digit), and H,C,D,S = 0,1,2,3 respectively (least significant digit). Then we have Fig. 6.13 which, expressing the numbers in the quaternary system, shows the restrictions more clearly. By adding 1 to each cell and expressing the result in decimal we obtain a 4 × 4 magic square (Fig. 6.14).

##### Fig. 6.12, 6.13, 6.14

There are 880 unique 4 × 4 magic squares, of which only 72 fulfil the above requirements. Euler studied a number of squares, side N, with these properties. Removing the condition for the diagonals it seemed possible to find solutions only when N ≠ 4K + 2 (K is a positive integer), and this was Euler's conjecture.

Nowadays it is possible to show that Euler was right for K = 1 (N = 6); a computer can enumerate all the possibilities (and find none) within minutes. Hint: to write the program treat the least and most significant parts in two separate arrays A, B, and start the first row and column of A with 00, 01, 02, 03, 04, and 05. Now consider the case of K = 2 (N = 10). To give a visual picture of the problem: we wish to place the numbers 00 to 99 (decimal) in a 10 by 10 square such that each most significant digit (0-9) and each least significant digit (0-9) appears only once in each column and row.
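For odd orders the digit condition (without the diagonals) can be met by a classical direct construction rather than enumeration; the sketch below builds and checks an order-5 example (my illustration, not the SWAC search):

```python
# Put the pair ((i + j) mod n, (i + 2j) mod n) in row i, column j.
# For odd n both coordinates form Latin squares and every pair occurs
# exactly once, i.e. the two squares are orthogonal (Graeco-Latin).
def graeco_latin(n):
    return [[((i + j) % n, (i + 2 * j) % n) for j in range(n)]
            for i in range(n)]

square = graeco_latin(5)
```

The construction fails for even n (2 is not invertible), which is why orders such as 6 and 10 need a different attack.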

If Euler's conjecture were true it would take a modern computer about one year to run through all the possibilities! In 1957 the SWAC computer (see reference) was given the problem, but had failed to obtain an answer after 100 hours (1 hour nowadays). The conjecture was eventually proved false for all cases of K > 1 by three mathematicians, Parker, Bose, and Shrikhande. They managed to devise rules which generate squares of orders 10, 14, 18, 22. An example of a 10 × 10 square is given on the cover of Scientific American, November 1959.

The moral of this story would seem to be that computers, sometimes called *number crunchers*, can get indigestion. However, it is interesting to note that if we apply the restriction to the diagonals it is impossible to produce even one of the arrays A, B for the case of N = 6. So the question, still unanswered, is whether there exists a 10 by 10 square with all the restrictions. Treated as a magic square, the rows, columns, and diagonals must sum to 505.

#### 6.5 THE BUSY BEAVER LOGICAL GAME

Alan M. Turing was one of the first to write about playing games with computers (18). He is, however, better remembered for his contributions to the foundations of mathematics. In particular his paper *On Computable Numbers, with an Application to the Entscheidungsproblem* is a must for any student of logic, because it shows that there are classes of mathematical problems that cannot be solved by any *fixed and definite process*.

The paper clarified the concept of such processes and described a *universal* machine (the Turing machine) which, *when supplied with suitable instructions, would imitate the behaviour of any other*. Turing was able to show that his machine had limitations; this means that, given an infinitely fast, infinitely large computer, it is still not possible to solve all problems. In another paper, *Computing Machinery and Intelligence*, Turing examined the general question of the limits of machines and whether it can truly be said that they could possess *intelligence*. His answer was that it is impossible to define intelligence because, for any definition, it is possible to conceive a machine which will have those attributes; therefore machines could 'behave intelligently'. Having thus relegated the human race to a class of Turing machines, just what do these machines look like, and what problems are they (and therefore the human race) unable to answer?

##### A Turing machine

An example of such a machine is an infinitely long magnetic tape which has a pattern of binary digits (0 and 1) written on it, plus a read/write head (somewhere on the tape) which is in one of N possible states. The machine

1. reads what is under the head and then (depending on which state it is in)
2. writes either a 0 or a 1 to the tape,
3. moves the tape left or right one place,
4. changes state and loops back to (1), or stops.

According to the paper by Shen Lin and Tibor Rado (37), a Turing machine can be thought
of as a program (set of instructions) spelled out in fixed format. The instructions
are written on a finite number of *cards* which represent the number of states the
machine may assume. It is theoretically possible to show that such a machine, given sufficient
states, can simulate any human behaviour despite its apparently limited number of operations.

For a simple example

| | CARD 1 | | | CARD 2 | | | CARD 3 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| READ | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE |
| 0 | 1 | R | 2 | 1 | L | 3 | 1 | L | 2 |
| 1 | 1 | R | 3 | 1 | R | 2 | 1 | R | 0 |

is a 3-card Turing machine. Depending on which state the machine
is in (1, 2 or 3), and what is under the reading head (0 or 1, shown in the left hand column),
a short program is accessed which will WRITE to the tape, MOVE left or right (L or R), and
CHANGE to another card or state. A change to state 0 means *stop*.

If the machine is started from state 1 on an infinite blank tape it will do the following:

##### Infinite Blank Tape

The machine has stopped after writing two 1's to the tape. If we now replace CARD 1 and CARD 2 with

| | CARD 1 | | | CARD 2 | | | CARD 3 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| READ | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE |
| 0 | 1 | R | 2 | 1 | L | 1 | 1 | L | 2 |
| 1 | 1 | R | 2 | 1 | R | 1 | 1 | R | 0 |

and again start from CARD 1 on a blank tape, we see that

- the program can never stop because it can never reach state 0 on CARD 3, and
- it will actually write an infinite number of 1's to the tape.

The game of the Busy Beaver is to design three cards which will either

- write the maximum number of 1's to the tape and stop (3MAX1), or
- perform the most operations and stop (3MAXOP).

The critical problem is to discover whether a program will stop or not. What Turing said,
in his classic paper, was that it is not possible to write a program (super Turing machine)
to decide whether all programs will stop or not because (roughly), if we think we have, then
how do we know that the super Turing machine will stop, and so on *ad infinitum*?

To show the immensity of the problem: each card must specify, for each of the two symbols it can read, a WRITE (2 choices), a MOVE (2 choices), and a CHANGE (4 choices), i.e. 2 × 2 × 4 = 16 designs per line and 16 × 16 = 256 designs per card. The total number of 3-card machines is therefore (16^{2})^{3} = 256^{3}, about 17 million!

The answers for 3MAX1 and 3MAXOP had been conjectured to be 6 and 21 respectively. The paper by Lin and Rado describes how the enormous number of possible designs was analysed, broken down, and tested, finally verifying the conjecture. An example machine to write the maximum 6 1's to the tape is given in the paper. It is

| | CARD 1 | | | CARD 2 | | | CARD 3 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| READ | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE |
| 0 | 1 | R | 2 | 1 | R | 3 | 1 | L | 1 |
| 1 | 1 | L | 3 | 1 | R | STOP | 0 | L | 2 |

which operates as follows

##### Turing Machine to Write 111111

taking 11 operations.

An example machine to perform the maximum number of operations (21) is

| | CARD 1 | | | CARD 2 | | | CARD 3 | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| READ | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE | WRITE | MOVE | CHANGE |
| 0 | 1 | R | 2 | 1 | L | 2 | 1 | L | 3 |
| 1 | 1 | R | STOP | 0 | R | 3 | 1 | L | 1 |
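Both card tables can be checked with a few lines of simulation — a sketch assuming the representation below, with state 0 meaning stop:

```python
# machine: (state, symbol) -> (write, move, next_state); state 0 stops.
def run(machine, max_steps=10_000):
    tape, pos, state, steps = {}, 0, 1, 0
    while state != 0 and steps < max_steps:
        write, move, state = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += 1 if move == 'R' else -1
        steps += 1
    return sum(tape.values()), steps      # (number of 1's, operations)

# the 6-ones machine (first example table)
MAX1 = {(1, 0): (1, 'R', 2), (1, 1): (1, 'L', 3),
        (2, 0): (1, 'R', 3), (2, 1): (1, 'R', 0),
        (3, 0): (1, 'L', 1), (3, 1): (0, 'L', 2)}
# the 21-operations machine (second example table)
MAXOP = {(1, 0): (1, 'R', 2), (1, 1): (1, 'R', 0),
         (2, 0): (1, 'L', 2), (2, 1): (0, 'R', 3),
         (3, 0): (1, 'L', 3), (3, 1): (1, 'L', 1)}
```

`run(MAX1)` stops after 11 operations with six 1's on the tape, and `run(MAXOP)` stops after 21 operations, matching the text.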

If the Busy Beaver game is played with more than 3 cards then the maximum possible number of 1's and operations rises accordingly. Programs for N ≥ 8 have been designed by M. W. Green (38) which achieve scores of > 1000, and it is possible to conjecture that his best answers are indeed the maximum. However, the problem of actually proving this for any given integer N is so enormous as to be practically non-computable.