6. PRESENTATIONS

Menu

DEJEUNER

* * *

TERRINE MAISON

* * *

FAUX FILET MAITRE d'HOTEL

POMMES ALLUMETTES

* * *

SALADE

* * *

PLATEAU de FROMAGES

* * *

CHOUX au KIRSCH

* * *

Seillac, le 8 Mai 1979

Domaine de Seillac

41150 Onzain

Drive them, kick them,
Make them work;
Take away the pat-cakes
From those who shirk.

Groan, groan, moan, and sigh.

Humour them, coax them,
But make them slave;
Drive them to toil
Till they fall in the grave.

Groan, moan, sigh, good-by.

-Gummy, Scribbles
(Collected Works)

6.1 Introduction

After the initial presentations, a large part of the second day of the Workshop was spent in trying to focus on the main issues to be discussed in depth in smaller groups. Towards this end, presentations were made by Bob Dunn, Jim Foley, William Newman, and Martin Newell. These presentations and an edited version of the resulting discussion are given here as they highlight not only those areas that were considered worthy of discussion but also those areas that were considered and rejected for one reason or another.

6.2 A CONTROL STRUCTURE MODEL FOR INTERACTION - R.M.DUNN

From out of the Then
And into the Now,
From the There to the Here
Come the Why and the How.

-Gummy, Scribbles
(Collected Works)

The goal has been to gain as wide a perspective as possible. To that end, one component of the model for interaction is used, in some recursive sense, as a guide to the whole. Specifically, interaction is seen as an intentional system (Fig 1). The two primary intentions are to:

  1. be responsive at the interface to the two (or more) parties (man and machine) to the interaction and
  2. trigger behaviour within each party appropriate to chosen (assigned) roles.

[Figure: INTERACTION, with its two aspects, Interface and Behaviour.]

Figure 1

From this view, the man brings to interaction a set of expectations and requirements for which fulfilment is sought within the machine. The task at the interface is to effect a sense of control in the feedback process between the parties. The machine is invested with the designer's interpretation of what the user seeks and knowledge (capability) to perform certain functions. Communication across the interface has the goal of achieving agreement (congruence) between the parties as to respective behaviours to be invoked and maintaining the mutual activities that result within the agreed-to behavioural/discourse domain (equilibrium).

Within the machine, each behaviour is modelled as a goal-oriented activity realised by a set of spanning (basis) sets of functions (tasks) that are (maybe) organised in some partial order. Behaviour within the machine is activated by a traverse of this lattice of tasks in some direction.

The next aspect in the model of interaction concerns a notion of connotation. There are two system-level connotative issues: first, the collection of reference terms, reference concepts, metaphors, representations, etc that are appropriate to the object of behaviour; second, the mode of behaviour relative to interaction:

                                     Role
                          ACTIVE              PASSIVE
             POSITIVE       A                    B
Attitude
             NEGATIVE       C                    D

A:  CONSTRUCTIVE                    ASSERTION/QUESTION
B:  DESTRUCTIVE/CRITICAL            ASSERTION/QUESTION
C:  CONSTRUCTIVE                    DEFENCE/EXPLANATION
D:  DESTRUCTIVE/CRITICAL            DEFENCE/EXPLANATION

Both partners may have an active or passive role and a positive or negative attitude. Here the concern is to acknowledge that changes of initiative occur in interaction. Furthermore, each change in initiative may be accompanied by a change in role and/or attitude to the process of controlling the interaction. Shifts can occur from asserting and questioning as means of control to explaining as means of control. In another direction, shifts can occur from encouraging further discourse in a direction to discouraging the direction and vice versa.

Each task in a behaviour's lattice also requires an aspect of denotative control (Fig 2). In the traverse of the lattice, a decision must be effected as to whether the task will (or may) be executed, or whether an alternative task (or tasks) is to be invoked instead. In fact, the decision to invoke an alternative can be a link into some point of another lattice (behaviour).

[Figure: denotative control of a TASK. The useful functions of the system (edit, define, point to, display, label, link, ...) are subject to a Do / Don't decision, or an INSTEAD alternative: substitute, simulate, put off (defer), expand, contract, evaluate.]

Figure 2
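
A reading of this behaviour model in present-day terms: a behaviour is a partially ordered set of tasks, and activating the behaviour means traversing that lattice while applying the Do / Don't / Instead decision of Figure 2 at each task. The short Python sketch below is only an illustration of that reading; every name in it (Task, Decision, traverse) is introduced here and does not come from Dunn's paper.

    # Illustrative sketch of a behaviour as a lattice of tasks with
    # denotative (Do / Don't / Instead) control; all names are invented here.
    from enum import Enum

    class Decision(Enum):
        DO = 1        # execute the task
        DONT = 2      # skip it
        INSTEAD = 3   # invoke an alternative, possibly in another lattice

    class Task:
        def __init__(self, name, action, prerequisites=()):
            self.name = name
            self.action = action                      # what the task does
            self.prerequisites = list(prerequisites)  # the partial order

    def traverse(tasks, decide):
        """Activate a behaviour: visit tasks in an order consistent with
        the partial order, applying denotative control at each step."""
        done, pending = set(), list(tasks)
        while pending:
            task = next(t for t in pending
                        if all(p in done for p in t.prerequisites))
            pending.remove(task)
            choice, alternative = decide(task)
            if choice is Decision.DO:
                task.action()
            elif choice is Decision.INSTEAD and alternative is not None:
                alternative()   # a link into some point of another lattice
            done.add(task)

A decide function that always returns (Decision.DO, None) reduces this to a plain traversal; richer policies capture the substitute, simulate and defer alternatives of Figure 2.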

Tasks are modelled (Fig 3) as a function applied at some level (Fig 4) of discourse to some process in a spectrum (Fig 5) that ranges from perceiving the need for the function to initiating its execution.

The underlying concept is that either a variant of the function exists for each relevant point in the space of the task model, or a function is used in several ways where each way corresponds to a point in the space of the task model.

[Figure: the task model. Intention, Connotation, Denotation, Rules/structure and Constituents/primitives are set against Perceive, Recognise, Symbolise, Value and Act, spanning deduction, induction and the static specification of STRUCTURE.]

Figure 3

[Figure: the significance of each level at the interface. INTENTION: goal, speciality, need, purpose, methods. CONNOTATION: context, constraints, limits, conditions, environment. DENOTATION: information, content, substance, data. RULES: composition, forms, concatenation, transformations. CONSTITUENTS: primitives, lexicon, ... The levels are annotated Why, What and How.]

Figure 4

[Figure: stages of cognition, from awareness to consequence. PERCEIVE the familiar, RECOGNISE the significant, SYMBOLISE the meaningful, VALUE the appropriate, ACT on the necessary, set against experience, training, style, philosophy and behaviour in the overall PROCESS.]

Figure 5

6.3 DISCUSSION

Bono:
Where does the user's model fit into your framework? Is it distributed throughout it?
Dunn:
I view systems as a set of behaviours with wiring between them. The user's model lies in the interconnections.
Guedj:
The word model is being used in more than one way. On the one hand there is the model the user has of the system's behaviour, on the other there is the view the system has of the user's behaviour.
On the man rather than machine side, there are the designer, the user and the sponsor. I suspect that each should have the same view or at least views which are coherent.
Dunn:
The views are not the same but must be coherent. The sponsor has an expectation of what the system will do in the hands of the user. The designer has an intuition of what can be done by anyone. The user has expectations of what the system will do. In general they are not the same.
Foley:
The user and designer must use the same model. The sponsor may use a different model.
Herzog:
The user and sponsor have the same point of view. I would like to see the designer avail himself of the user's point of view.
Engelman:
Can the designer and user have the same model? The problems faced by the two are different. People learn how to use systems slowly, taking simple things first, the rest of the system being a cloud. At any moment, many models exist over the set of users. The problem facing the designer is how to define a system that can be used efficiently at a variety of levels.
Foley:
The model should be capable of being broken into subsets in a variety of ways.
Kay:
There are two styles of teaching, each valid in a certain context. The first is to analyse the object into parts and then teach the parts. Sometimes you lose by doing this and then one must look for the simplest way in which to teach the whole. Piano playing is a good example. Teaching one hand then the other is not a good method. There are no rules to say which method to use where.
Hayes:
There is some literature on this topic, though the results are complex. The length of the task and the intelligence of the subject are probably important. The knowledge the user already has might also be important.
Herzog:
If it is a new area, no users currently exist. Users in general only know the way they are doing things at the moment. Seeing how books are put together manually is not helpful to designing a computer system for printing.
Bo:
Where is the interface between the user and the system?
Guedj:
The model is in both the user's mind and the designer's mind. I assert that systems that transfer well are such that the user's model is close to the designer's model. These are guidelines on how to improve a design but there is no general framework for design.
Encarnacao:
It is important that dialogues are constructed so that they can adapt to a changing user population.
Krammer:
What are we trying to specify and understand? Both man and machine talk about a part of the world. Is it a model of this, or can I ask something of the machine? I do not understand the formalism. The designer's model and user's model can never be the same. The machine and the designer may know more about the system. One class of user may only know a part of the system but the designer and machine must know all. The designer in his own special field may know more than the machine. There is overlap, but only partial.

6.4 METHODOLOGY OF INTERACTION - J.D.FOLEY

6.4.1 Introduction

What is a methodology? It is, as a working approximation, a process or a procedure or a conceptual model for understanding and/or designing. Interaction comprises:

  1. An Input Language
  2. An Output Language
  3. A Communication Protocol, with prompts, feedback, and sequencing rules

All these components are tied together by a user's conceptual model. Three of the position papers and Chapter 28 of Newman and Sproull seem to be expressing similar frameworks:

DUNN             FOLEY             MORAN             NEWMAN
Intention                          Task
Connotation
                Conceptual
Denotation      Semantic           Semantic          User's Model
Rules           Syntactic          Syntactic
                                                     Command Language
                                                     Information Display
Constituents    Lexical            Interaction

The real design decisions are at the lower levels, where there are three common themes, semantic, syntactic and lexical. These terms are extrapolations of classical language themes into the graphical domain.

The design process seems to operate in a top-down, iterative fashion over the following levels:

  1. Conceptual
  2. Semantic
  3. Syntactic
  4. Lexical

At each level there are important, but quite different, considerations and decisions. However, human factors and psychology can help at each level.

6.4.2 Conceptual Model

The user's conceptual model is the set of basic concepts the user must understand to use the system. Examples are:

  1. String versus Line-Oriented Editors.
  2. Hierarchical versus Network versus Relational DBMS
  3. Procedural versus Non-Procedural Programming Language

6.4.3 Semantic Level

The semantic level is divided into an input and an output side. The input side contains the specific operations on the conceptual model, and their effects upon the model. The output side contains the particular information presentation techniques, such as the choice between bar charts, pie charts, tables, maps and wire-frame or hidden surface presentations.

6.4.4 Syntactic Level

The syntactic level is also divided into an input side and an output side. The input side contains the sequences of tokens (actions) required to specify the semantic actions. The output side contains the particular screen layouts containing the problem information and the prompts, menus and error messages containing the control information.

6.4.5 Lexical Level

On the input side, the lexical level contains the groupings of lexemes into syntactic tokens. These lexemes are the basic hardware units, such as pen hits, keystrokes, knob positions and phonemes (for speech input).

On the output side, the lexical level contains the information encodings in terms of hardware units such as colour, intensity, linestyles, fonts and phonemes (for speech output).

Finally, an aside which is not for discussion at present - device abstractions belong at the syntactic level; the application programmer should be able to program the syntactic to lexical interface (i.e. the bindings of sequences of lexemes to syntactic tokens).
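
A hypothetical sketch of the input side of these three lower levels, including the aside about programmable lexeme-to-token bindings, is given below; the table and function names are inventions of this illustration and are not taken from Foley's presentation.

    # Hypothetical sketch of the input side of Foley's three lower levels.
    # The binding table is ordinary data, so the application programmer can
    # re-program the syntactic-to-lexical interface.
    lexeme_bindings = {
        ("keystroke", "d"):          "DELETE",   # two lexeme forms bound
        ("pen_hit", "menu:delete"):  "DELETE",   # to the same token
        ("pen_hit", "canvas"):       "PICK",
    }

    def lexical(events):
        """Lexical level: group hardware lexemes into syntactic tokens."""
        return [lexeme_bindings[e] for e in events if e in lexeme_bindings]

    def syntactic(tokens):
        """Syntactic level: a token sequence specifies a semantic action."""
        if tokens[:2] == ["DELETE", "PICK"]:
            return ("delete_object",)
        return None

    def semantic(action, model):
        """Semantic level: the operation and its effect on the model."""
        if action == ("delete_object",):
            model.pop()                # stands in for the real effect

    model = ["circle", "square"]
    semantic(syntactic(lexical([("keystroke", "d"), ("pen_hit", "canvas")])),
             model)                    # model is now ["circle"]

Re-binding an accelerator key, in this sketch, is a change to the table alone; the syntactic and semantic levels are untouched.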

6.5 DISCUSSION

Bo:
Are we able to agree on the levels and what do we gain by this subdivision?
Foley:
I am not particularly worried what goes on each level. More important, is there a level structure and is that useful in guiding our thoughts?
ten Hagen:
These four levels describe how a designer behaves or should behave. It allows him to break his large system into a number of parts. Syntax allows him to define the system in a number of smaller parts, but each part may have the full set of four levels. The definition is recursive. There is one source of confusion. What one person calls syntax, another calls semantics because he is thinking about different aspects of the same subject. I would consider Alan Shaw's paper semantics but you would call it syntax. He pins down a specific semantic issue and then tries to describe it using syntax.
Foley:
The SDMS has no semantics in it as far as the machine is concerned but there is a lot in the user's mind in the way he moves around Dataland.
van den Bos:
We were promised a methodology of interaction; the breakdown into levels applies equally well to batch. There is a much closer link between input and output than you suggest in your model.
Foley:
There must be feedback at each level - character echo at the lexical level, blinking a menu item that has been hit at the syntactic level, etc.
Hayes:
I have been listening to the discussion and get a strange feeling in my psychologist's soul! Interaction is proposed as just consisting of listing inputs followed by outputs. To consider interaction at this lowest level will not be enough.
Kay:
This layered model resembles the model you have for higher level languages like PASCAL. However, PASCAL is not very good in a modelling situation - it is good at dealing with things that have state.
If we look at exotic computer graphics, there are stronger models that we can get analogies from. Consider the viewing model in 3D graphics where there is an observer and an observed and filtering rules. In the Evans and Sutherland system, the filtering rules are just to do with viewing the object given the relative positions of the observer and observed.
If you consider these two as being in their own coordinate systems, you can extend the notion of coordinate system to include the properties of the two elements. The viewing process is then an information retrieval operation for the things we specify from the cockpit of our aeroplane in information space.
Using this metaphor, I would claim that we would hate to have the controls of the plane change as we go from air to water (turn into submarine controls!). We do not want to have to learn to manoeuvre 50 different kinds of vehicles.
Whatever domain is involved, we need to normalise its properties as much as possible so that any idiosyncratic properties are routed to a small place on the controls.
This filtering model that I am proposing is a stronger model than the hierarchical one. Flying a plane through information space has much more to do with the actual things we are trying to observe. It can handle all the situations of conventional computer graphics and some things that conventional computer graphics finds quite difficult.
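
One way to make the filtering metaphor concrete is sketched below; the classes and the particular filter rule are invented for this illustration and should not be read as a description of any system Kay built.

    # Toy sketch of viewing-as-information-retrieval; all names invented here.
    class InformationObject:
        def __init__(self, name, position):
            self.name = name
            self.position = position        # coordinates in information space

    class Observer:
        """The cockpit: its controls stay the same whatever is being viewed;
        domain idiosyncrasies are confined to the filter rules."""
        def __init__(self, position, filter_rule):
            self.position = position
            self.filter_rule = filter_rule

        def view(self, space):
            # Viewing is retrieval: keep what the rules admit, relative to
            # where the observer currently is.
            return [obj for obj in space if self.filter_rule(self, obj)]

    nearby = lambda observer, obj: abs(obj.position - observer.position) < 10

    space = [InformationObject("memo", 3), InformationObject("archive", 40)]
    print([o.name for o in Observer(0, nearby).view(space)])   # ['memo']
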
Baecker:
We could look at:
  1. How to describe interaction formally
  2. How do we build better interactive systems
    1. What is good interaction
    2. How do we measure it
Alan Kay's and Nick Negroponte's work challenges us to build more versatile systems. Our current descriptive techniques are not adequate to describe all the things we can do.
Krammer:
Accepting Foley's categorisation, some areas are difficult to categorise:
  • Ergonomics - is that lexical?
  • Psychology - is that syntax or semantics?
  • Creativity - is that with the conceptual model?
Foley:
Creativity goes across the board; there is a particular kind of creativity that goes with new conceptual models. Traditional ergonomics is at the lexical level, as is perceptual psychology. Cognitive psychology is more at the semantic level.
van Dam:
The syntactic function is to present semantics. We get a lot of fallout from the language model and we must push this at the same time as we push the filter model. However, the tight coupling and interactiveness in the Negroponte/Kay sense is a different kind of phenomenon from the type of interaction that is more a one-at-a-time batch activity. The richer type of interaction may require a more expressive model than the simple language paradigm.
Engelman:
We have been borrowing from psychology and also linguistics to describe how we produce or understand an utterance. There are also tools in psychology for describing interaction. They deal with exchange of initiative in conversation, change of focus, and how you recap back into previous states. It is not as mathematical as syntax but there is a great deal that does exist as tools in terms of rules for describing states and transitions between states.

6.6 PROPOSAL FOR WORKING GROUPS - W.NEWMAN

Perhaps we are trying to discuss a problem that is too difficult at the moment. The development of a general methodology for interaction is perhaps not a problem that we should tackle head on. Could I digress and tell you a story:

In the early days of hidden surface removal some people at Utah were trying to solve the problem by doing a massive sorting operation on about 200,000 polygons, in real time. They couldn't find the way to do it, and went around asking various people for ideas. One of the people they asked was a systems programmer in the Computer Centre, who took one look at the problem and said, "It can't be done, you'll have to sub-divide the problem." No-one took any notice, so eventually the systems programmer went away and developed it into the well-known algorithm that bears his name.

The moral of the story is: why do we not try to subdivide our difficult problem? I want to put up a strawman list of ways in which we can make progress. Let us identify some issues where we can make headway, and let us recapture the spirit of Seillac I (and that was not what went on in the bar!).

At Seillac-I, a bunch of people discussed the topic of Methodology of Computer Graphic Systems. They traded pleasantries (and insults) and eventually something interesting happened and people began to ask pointed questions. People were arguing about issues that were well understood (supposedly!). Similarly, I think we should try and disagree on topics that we have some common understanding about rather than agree about areas where we have little understanding. Perhaps we can raise some issues that we may be able to get some common understanding on.

To conclude this encapsulation of Seillac-I: we made great strides there in the Methodology of Graphics System design. The danger is that we may take it too seriously. If we look at Seillac I, they saw one outstanding problem in graphics (see Fig. 1).

We became too concerned about climbing the one peak because it was there. Some of us came away from Seillac I and worked on the Methodology of Graphics Systems while others worked on the CORE. The result was that the ones who worked on the CORE, because they proceeded more quickly, over-shadowed what was being done on the Methodology. We must try and avoid this problem here.

[Figure: the graphics landscape drawn as a mountain range, with one outstanding peak labelled THE STANDARD among smaller peaks dated 1963, 1966, 1968, 1971 and 1973.]

Figure 1

Let us look at the situation as far as interaction is concerned (see Fig 2).

[Figure: the interaction landscape drawn as a range of similarly sized peaks, only some of which carry dates (1962, 1971).]

Figure 2

Notice that some of the peaks in this case do not have dates on them indicating that they have not been climbed yet. Also, they are all about the same size. We should look around for ones that we can climb.

Let us raise the excitement level again and consider the following:

If we define a set of systems P and a set of users for each system then we can define the set of users for all the systems. I would estimate that this number of users, in a few years, would be of the order of 100,000,000. How do we describe user interfaces for all these people?

If we look at the number of applications and sum over all the systems for each application then this defines the total number of products. This is a much smaller universe.
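
Restated compactly (the symbols below are introduced here for illustration; they are not Newman's):

    U = \bigcup_{p \in P} \mathrm{users}(p), \qquad |U| \approx 10^{8}

    \mathrm{products} = \sum_{a \in A} |P_a| \;\ll\; |U|, \qquad
    P_a \subseteq P \text{ the systems offering application } a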

Let us look at the complications of the large user base. Some products will have many users. Traditional design methods will not work: no longer can you have users coming to you, asking "can you help me design this?", and then iterating until it is usable. We are now in the area of marketing. With a large user base, we will have to separate marketing from design and have a methodology to bind these groups together. We need mechanisms for specifying requirements and for designing systems to meet them. One of the complications of a large user community is that the time spent in design will be insignificant compared with the time spent in user support and training. There will also be problems in respect of reliability etc. It will now be sensible to spend a great deal of time in the development of the product: we will need to emphasise the ability to analyse tasks and to develop to a high level the ability to design user interfaces and to evaluate them.

How does this influence what we should talk about? Many people have already tried to define what we mean by methodology of interaction. Do we need to specify it precisely? Perhaps we can agree on a set of strategies and codes of practice; that will help the person designing a user interface. I propose we break into smaller groups and discuss the following topics:

  1. Can we develop formal descriptive techniques for user interfaces?
  2. Can we successfully evaluate user interfaces?
  3. Can we achieve portability and/or device independence in user interfaces?
  4. Can we relate our programming environments to the problem of user interface design (some products have only a few users, so could the system be tailored to their own use)?
  5. Can we develop a methodology for user interface design?

6.7 DISCUSSION

van Dam:
What do you mean by user interface? Is it a dialogue, man-machine interaction..., etc. or is it something more specific?
Newman:
I mean the interface between the user and the computer system at all levels including interaction.
Negroponte:
William (Newman) has sidestepped the issue. User interfaces are channels of communication whereas interactions are intentions, purposes etc. We do not have interactive systems, SDMS is not interactive, only reactive - giving responses to user requests. Interactive systems are far too difficult to discuss. User interfaces we can talk about.
Sancha:
Interactive means to respond to actions both ways. Your definition of interaction is far too restrictive.
Kay:
The idea of change of state and returning to the original state are totally absent from SMALLTALK and SDMS. Interaction refers to a lot of things between humans and we should not apply it loosely to computer systems that do not even come close to that. SMALLTALK is just a reactive system and has none of the properties that I would associate with human interaction.

6.8 PROGRAMMING ENVIRONMENT - M.E.NEWELL

The difficulty before us at this time is to focus on some topics that we can get our teeth into. We must identify some specific topics for which we can measure our progress. We can then generalise from these later.

I came to this workshop to learn about man machine interaction.

I would offer you the following scenario.

Someone walks into your office who has done some machine code programming, knows a few algorithms and wants help with a new application. The question is, What do you say to him? I would suggest that there are several things that you would wish to tell him. Firstly one would tell him to implement a high level language on his machine. When he had done this one would talk to him about topics such as structured programming, top-down design etc.

Suppose that after gaining some experience in writing batch programs in a high level language he comes back and says he wants to write an interactive program. He knows who the end users are. What does one tell him at this time? This is a concrete question that we could address.

I would like to make a short digression.

There is a school of thought that says that databases are stupid programming languages, in the sense that the designers of database systems present you with a set of primitives on which you can build operations, actions etc. However, there are limitations in this: the power of expression is limited and somebody will run into those limits at some point. Somebody will always come back asking for some new facility. The conclusion is that databases, as implemented, are considered harmful. Information bases should be at least as strong as a Turing Machine; in other words, they should be programming systems.

I would argue that the same considerations apply to interactive interfaces. Most interactive interfaces are stupid programming languages. People want conditional statements, repeat statements, etc. Command languages should therefore be considered harmful. Interactive systems should be programming systems.
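
As an illustration of the claim (not of any particular system discussed at the workshop), the sketch below turns a fixed command set into a small programming system simply by letting commands compose under repetition and a conditional; all names are hypothetical.

    # Hypothetical sketch: a command set that is also a (small) programming
    # system, because commands compose under 'repeat' and 'when'.
    commands = {}

    def command(fn):                     # register an ordinary command
        commands[fn.__name__] = fn
        return fn

    @command
    def move(obj, dx):                   # stand-ins for real interactive
        obj["x"] += dx                   # operations

    @command
    def erase(obj, *_):
        obj["visible"] = False

    # The constructs users keep asking for:
    def repeat(n, cmd, *args):
        for _ in range(n):
            commands[cmd](*args)

    def when(predicate, cmd, *args):
        if predicate(*args):
            commands[cmd](*args)

    box = {"x": 0, "visible": True}
    repeat(3, "move", box, 5)                           # box["x"] == 15
    when(lambda obj, *_: obj["x"] > 10, "erase", box)   # box is erased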

Please note however that this is not the same thing as saying that one can learn from programming languages (semantics, syntactic, lexical categories, etc.) to understand interactive interfaces. I am saying something far more concrete than that.

Yesterday Alan Kay mentioned that he had written an application in six pages of SMALLTALK that would have taken fifty pages of Algol 60. I ask the question why is there this discrepancy? I want to suggest that it is because SMALLTALK has two essential features:

  1. SMALLTALK is an object oriented language.
  2. SMALLTALK provides a good programming environment.

I would suggest the power of SMALLTALK is due to factor (2), not factor (1). SMALLTALK has no essentially greater expressive power than ALGOL 60. Other languages could be equally powerful if cast in a similar programming environment.

The conventional environment is as follows:

[Diagram: the conventional environment. The user program stands alone on the user's side of a wall; the editor, compiler and loader, and support system lie behind it.]

The only item the user has access to is the user program. Only the implementor has access to the other parts of the system. The view provided by SMALLTALK is something akin to:

[Diagram: the SMALLTALK view. The user program, editor, compiler and loader, and support system are all equally accessible to the user; there is no wall.]

There is no wall built around the rest of the system. In SMALLTALK the user has access to all components of the system all the time. If you take the view that an interactive dialogue is indeed an example of a programming language, the tools that you provide to help a user write programs (editors, compilers, etc) should also be valuable and available to the interactive end user of the system. When a user starts to program in SMALLTALK he is already in an interactive environment and can specify those features in his problem that are incrementally different from the SMALLTALK environment. Thus the answer I would give to the question posed above would be:

  1. Build an interactive programming environment
  2. Then return and we will talk about X.

One is then led to ask the question, What is a programming environment? There are several properties one can list:

  1. Incremental program execution;
  2. Integrated editors which include bit map editors, picture editors, etc. as well as conventional text editors;
  3. Integrated filing system - one would like to be able to save the virtual memory from session to session for example. Present filing systems are not powerful enough;
  4. Integrated, open, support packages - in most conventional systems one cannot get at the support package. The user should be able to modify this to suit his requirements and use his own version in place of the system version;
  5. Integrated debugging.

My feeling is that a subgroup could profitably look into the requirements for a good interactive programming environment.

6.9 DISCUSSION

Kay:
In support of what Martin has said, interactive programming environments have been provided in other contexts: see, for example, Lampson's Interactive Machine Language Programming, which described an integrated editor and debugger for a machine code environment. Sutherland's Sketchpad thesis could also be cited. One of the interesting things about Sketchpad was the way it was implemented. Sketchpad used the notions of class and instance and everything important was generic. Every item had associated with it a procedure called DISPLAY which knew how to display that kind of object. One reason why the programs are smaller in SMALLTALK is because it is simulation-style programming where the data structures and procedures that do things are integrated.
A reason why programs are small is the active analogy of one bit of code to another. A fancy interactive window system is just an add-on to the basic window system. Children tend to program naturally by this kind of differential programming; they do not have to think out programs from scratch. Experts also use this method; in fact 95% of SMALLTALK is generic.
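
A modern paraphrase of these two points (the generic DISPLAY of Sketchpad, and differential programming) might look like the sketch below; the class names are invented here and do not come from Sketchpad or SMALLTALK.

    # Invented illustration: a generic display plus differential programming.
    class Window:
        def __init__(self, title):
            self.title = title

        def display(self):               # every object knows how to show itself
            print("+--", self.title, "--+")
            self.display_contents()

        def display_contents(self):
            print("| (empty) |")

    class FancyWindow(Window):
        """Only the difference from Window is written; the rest is inherited."""
        def display_contents(self):
            print("| scroll bars, menus, icons |")

    for w in (Window("plain"), FancyWindow("fancy")):
        w.display()                      # one generic call for every kind
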
Dunn:
There is a cardinal assertion in this: that to have an interactive system, you need a good programming environment.
Newman:
I am quite willing to disagree with Martin on the need for a programming environment. There are specific cases where this is not necessary. The MARKUP program has been in use at XEROX for four years with over 100 users and no one has complained that it needs to be set in a programming environment; the application is so simple.
The other extreme is something like an office system, where one cannot define a simple language that would adequately express the kinds of objects and constructs being dealt with; the application is so complex. One could not let the office workers loose on programming the environment they are working in!