8. GROUP REPORTS

Menu

DINER

* * *

QUICHE LORRAINE

* * *

LANGUE de BOEUF BOLOGNAISE

SPAGHETTIS à L'ITALIENNE

* * *

PLATEAU de FROMAGES

* * *

PRINTANIER au KIRSCH

* * *

Seillac, le 8 Mai 1979

Domaine de Seillac

41150 Onzain

8.1 DEFINITION OF INTERACTION

When you say what you think, be
sure to think what you say.

-Muggles, Maxims

8.1.1 Introduction

The participants of the workshop disagreed over the question of which systems may properly be called interactive and which are merely reactive. The group reached a consensus that:

INTERACTION IS A STYLE OF CONTROL
and
INTERACTIVE SYSTEMS EXHIBIT THAT STYLE

Following this, the group turned its attention to identifying the desirable properties of this style and to distinguishing modes of interaction.

8.1.2 Approach

What, then, are the desired properties of truly interactive systems? The list below contains some of the properties believed to be important:

  1. There must be an a priori general understanding of the application domain common to both man and machine.
  2. There must be closure in the common understanding of the current context and goal.
  3. This closure must be maintained as the discourse proceeds.
  4. The responsibility for achieving goals must be shared between man and machine.
  5. Initiative should be balanced between man and machine.
  6. The system should be adaptive in a variety of senses (see the sketch after this list):
    1. To the user, i.e. the system should understand who the man is, maintaining a model of his expertise.
    2. To the current context, i.e. its style of response should evolve as the interaction progresses.
    3. To the available input and output devices and communications.
    4. The system will, in some cases, also learn about the application domain.
  7. The system must have the capability to explain its assertions, for example explain how it has arrived at some plan, and be able to defend this.
  8. The system should have the ability to transfer gracefully between query, explanation and knowledge acquisition modes.
  9. Surprise in an interaction is stimulating.
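
Property 6a lends itself to a small illustration. The sketch below is our own construction (the class, the command name and the expertise threshold are invented, not anything proposed at the workshop): the system counts how often the user has invoked each command and shortens its replies as his expertise grows.

    # Illustrative sketch of property 6a: maintain a model of the user's
    # expertise and adapt the verbosity of responses accordingly.
    class UserModel:
        def __init__(self):
            self.uses = {}                  # command name -> invocation count

        def record(self, command):
            self.uses[command] = self.uses.get(command, 0) + 1

        def expertise(self, command):
            return self.uses.get(command, 0)

    def respond(model, command, short_form, long_form):
        """Reply tersely to practised users, fully to novices."""
        model.record(command)
        return short_form if model.expertise(command) > 3 else long_form

    model = UserModel()
    for _ in range(5):                      # replies shorten on later calls
        print(respond(model, "plot",
                      "plot: done",
                      "plot: drawing the current data set on the display; "
                      "use 'axes' to change the scales"))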

The group then looked at modularity and interactive systems. An interactive system may have a predominant mode or may consist of modules each with a different mode. If the system does consist of modules in different modes, then the boundaries between them must be clear to the user. Also, it is necessary to define a scenario of how modules are brought into play, including the necessary communications channels and control paths.

We have listed a number of principal modes for systems. This list is, of course, partial.

  1. Enquiring
  2. Exploration of level
  3. Directed task execution
  4. Data input
  5. Co-operative design
  6. Editing
  7. Programming
  8. Instruction
  9. Explanation of inference

We expect that:

  1. The properties of different modes will be different.
  2. Techniques for different mode definitions may vary.
  3. The characteristics of the communications channel may affect the techniques used in defining a particular mode of interaction.

There are a number of techniques we feel might be helpful:

  1. Knowledge bases: These could be used in design, planning, and in the application domain. Also, they could improve communication by allowing approximations to natural languages in the interaction and by allowing smart graphics.
  2. Models: One might draw from linguistics models, particularly from pragmatics.
  3. Smart control structures: We should focus on how these are achievable.
  4. Programming Environments: It is clear that an excellent programming environment is required. One might point to very high level languages (for example, KRL) and integrated programming systems (for example, INTERLISP).

8.1.3 Discussion

The search for a formal definition of the term interactive system was given up at a very early stage. The gap between the meaning of the words interaction and interactive was recognised. Some representative examples collected during discussion were:

  1. Chess by mail.
  2. Putting a batch program into a very fast computer with on-line user peripherals.
  3. Mutual initiative systems.
  4. Knowledge based consultant.

8.1.4 Suggestions for Future Work

  1. Compile a comprehensive list of desirable properties of interaction
  2. Structure these according to a conceptual framework
  3. Find the typical modes of interaction
  4. Relate the modes to the framework, for example by finding the place of modules exhibiting these modes
  5. Relate modes to properties
  6. Find a way of characterising interactive systems according to properties and modes

8.2 DISCUSSION

Kay:
Your point about surprise being stimulating reminds me of Paul Hindemith's book A Composer's World. He argues that when listening to music you are co-creating: you are hearing, remembering and also looking ahead. If you predict everything correctly the music seems mundane; if you can predict nothing it becomes frustrating. Good music must have a certain unpredictable but consequential element.
Newman:
I like the notion of graceful movement between modes of interaction. Moving from query mode to the next level and back exactly categorises how people talk. This has been demonstrated by one of my PhD students, Eleanor Winn.
Kay:
The classic system is SCHOLAR (working in the education domain). This system extracts modes from a database.

Menu

DEJEUNER

* * *

COCKTAIL MOSCOVITE

* * *

CARRE de VEAU NORMANDE

VELOUTE d'EPINARDS

* * *

PLATEAU de FROMAGES

* * *

MARRONIRE au RHUM

* * *

Seillac, le 9 Mai 1979

Domaine de Seillac

41150 Onzain

8.3 MODELS OF INTERACTION

8.3.1 Introduction

The goal was to work on a model of interaction in order to be able to develop guidelines for a design methodology for interactive systems. The group started by studying and discussing the several models presented in position papers by members of the group. The authors of these models defined them as follows:

  1. INTERACTION CONTROL MODEL (C.Tozzi): This is a model of the control feedback between man and machine in a dialogue system. The model takes disturbances of the dialogue and the learning capabilities of the user into consideration.
  2. COGNITIVE MODEL (W.Dzida): The cognitive executive system is divided into two parts, the superordinate system which is concerned with making a plan and the subordinate system which is concerned with automatically performing the plan. By the separation of planning from doing, flexible man-computer dialogues are possible.
  3. KNOWLEDGE MODEL (S.Ohsuga): The knowledge system includes a knowledge base, a data base and a procedure base as the basic information bases, and a set of high level procedures to use the knowledge, data and procedures in these bases. These procedures include an algorithm to translate the external form of information (such as language, graphic information etc) to the representation of knowledge. The procedures also include a deductive inference algorithm and a program generation algorithm. With such facilities, the system can support the user's problem solving task.
  4. FILTER MODEL (A.Kay): An extensional analogy to the multiple viewing of complex simulations in Evans and Sutherland 3D graphics: the extension includes a filter consisting of a window end (on the simulations), a viewport end (at the observer) and ways to selectively choose scenes from the simulation.
  5. LANGUAGE MODEL (J.Foley): The input and output of an interactive system are modelled as languages. Both input and output have semantic, syntactic, and lexical components, all integrated together by the user's conceptual model of the system.

A synthesis of the most important features (capabilities) of all these models into one single model was proposed. The further goal was to design such a model and to see how it fits into approaches discussed during the workshop (levels, parallelism, concurrency, etc). Input of bulk data, data manipulation, and output of results were classified as different stages of an interaction. The system interface was divided into fixed, adaptive and tailored.

8.3.2 Principles and Properties

Several principles and properties were used to construct the model and underlie its form and intended interpretation. These principles and properties are given below:

  1. The system is composed of a user and a machine working together - thus an interface. The user of the system has some idea (conceptual model) of how the computer system can be used by him to perform his specific task using the input/output devices. The system designer has some idea of the user's behaviour which he takes into account.
  2. Every object and process in the system (including the system itself) can be and is described using a concept of linguistic levels (four as per Foley if the entity is single purpose, five as per Dunn if the entity is multiple purpose).
  3. When what is to be done is known or the object is identified then semantics, syntax and lexemes combine to describe how the process is to be achieved or to describe how the object is formed. In this sense, a process is also an object (Kay).
  4. When the what is as yet unresolved, semantics, connotation and intention combine to identify what - either as an object or a process.
  5. Control is a special process (therefore, it can have a form as above). Interaction is a style of control (therefore, it too can have a form as above - Dunn).
  6. An interacting system is a cooperative relationship between man and machine. It is a multilevel system where the man and the machine must achieve an equilibrium on every level.
  7. For any state of the system, agreement (congruence) at some level (linguistic) must be reached (closure) and maintained (equilibrium) in order for the system to be successful and satisfying (Dunn).
  8. Feedback (Tozzi) is used to, synchronously or not, shift the awareness of the two parties in the system from achieving agreement (for example, supervisory control) to carrying out a work task (task performance) or vice versa (Dzida).
  9. Learning in the user (Dzida), inference in the machine (Ohsuga) and filtering in the machine (Kay) are used to reach agreement (Ohsuga) on the knowledge, data, forms, functions and procedures that form the common domain for the current work to be done.

8.3.3 Relation between Principles and Properties

Every object or process in the system must show the levels that are involved. All communication paths must be shown, reflecting the level they emanate from and the level they connect to. There is a distillation of the conceptual model that reflects the principal properties and their relation in an operational sense. First, consider:

[Diagram: the Human and the Computer shown as two partners facing each other]

This says that we consider the system as two equal partners; this may allow us to apply cybernetic models to the Man and human concepts (for example, creativity) to the Machine. The two parties can be represented as:

[Diagram: each partner divided into an A (knowing) part, a B (doing) part and I and O parts, the two partners communicating about the DOMAIN through the languages LI and LO]

Conceptually, there exists a DOMAIN of application. The language of communication consists of two parts, LI and LO, the I and O parts of the partners. The two parts A and B represent knowing and doing.

The B part consists of data, procedures, and functions, etc, while the A part consists of operations on these, including operations on functions. The I and O parts can be further divided as follows:

[Diagram: the languages LI and LO each decomposed into lexical, syntactic and semantic levels, with lexical and syntactic feedback paths from the input side to the output side]

If the input gives some feedback, this can be achieved by the input placing information in the queue for the output processor. Other processes in the system may ask the output processor to display information. It is possible that some advanced new input tool may somehow enter immediately at the semantic level. A minimal sketch of this queueing arrangement, with invented names, is given below.
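
    # Sketch of the queueing arrangement just described (names invented):
    # the input handler and any other process request display by placing
    # messages on a single queue served by the output processor.
    from queue import Queue

    display_queue = Queue()

    def input_handler(keystroke):
        display_queue.put(f"echo: {keystroke}")        # lexical feedback

    def task_performer(result):
        display_queue.put(f"result: {result}")         # another process

    def output_processor():
        while not display_queue.empty():
            print(display_queue.get())

    input_handler("DRAW LINE")
    task_performer("line #17 added")
    output_processor()

If we look in more detail at the A and B parts: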

[Diagram: the human's knowing and doing parts (AH, BH) facing the computer's corresponding parts (AC, BC)]

All the details of the processing are cleared at the lower level. Goals and intentions are agreed upon at the higher level. If the two A parts agree on something to be performed, they instruct the B parts to perform the task; the B parts reply to the A parts when they have completed it. A B part may decide an error has occurred. It will then inform its A part that something is wrong. A conversation will then take place between the two A parts to decide what to do. The model opens the way to develop from very simple interactions to highly intelligent, human-like ones. Similarly, it may be considered as a model both for strictly sequential participants and for well coordinated parallel individuals.
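
A toy rendering of this two-level protocol may make it concrete; all names and the error case below are invented for illustration.

    # The A (knowing) parts agree on WHAT, the B (doing) parts carry out
    # HOW, and a failure detected in a B part is referred back to A level.
    def b_part(task):
        """Performance level: do the task, or report failure upward."""
        if task == "divide by zero":
            return ("error", "cannot perform task")
        return ("done", f"{task} completed")

    def a_part_dialogue(task):
        """Control level: the A parts agree on a task and supervise it."""
        status, detail = b_part(task)          # instruct the B part
        if status == "error":
            # a conversation between the two A parts decides what to do
            return f"A-level: renegotiating goal after failure ({detail})"
        return f"A-level: acknowledged ({detail})"

    print(a_part_dialogue("draw a square"))
    print(a_part_dialogue("divide by zero"))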

The basic model can, therefore, be thought of as:

[Diagram: the basic model. HUMAN and COMPUTER meet across the INTERFACE within the DOMAIN; at the upper, WHAT (control), level system planning faces system control, and at the lower, HOW (performance), level task performance faces task performance]

This model helps to show where the interaction is performed, how to build more and less intelligent systems, and what is happening when logical errors occur.

8.3.4 The Interaction Model

The Interaction Model is best discussed from the interaction process point of view, as follows (a schematic sketch of the flow is given after the list):

  1. A user interacting with the system (say via a Display) inputs a command(s).
  2. The command is processed lexically, syntactically and semantically and then appropriately routed to the inference processor, task performer or the filter.
  3. With input from the command processor and the structure of the task the inference processor essentially builds up the knowledge base.
  4. The task performer analyses the input commands, decides on the structure and the task to be performed, updates the application data base, supplies information for inference purposes and also for the filter process which does the feedback generation.
  5. The filter essentially provides the feedback in the context of the command, application database, the user's knowledge database and the task to be performed.
  6. The knowledge database models the user's behaviour, is maintained by the inference processor and feeds into the feedback mechanisms.
  7. The application database is updated when tasks are performed.
  8. The feedback generation again goes through semantic, syntactic and lexical phases.
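
The following schematic sketch traces the flow just listed. Every function and database here is a hypothetical stand-in; the numbered comments refer back to the list items.

    knowledge_db = {}     # 6. models the user's behaviour
    application_db = []   # 7. updated when tasks are performed

    def lexical(text):      return text.split()
    def syntactic(tokens):  return {"verb": tokens[0], "args": tokens[1:]}
    def semantic(tree):     return ("task", tree["verb"], tree["args"])

    def inference_processor(task):           # 3. builds the knowledge base
        knowledge_db[task[1]] = knowledge_db.get(task[1], 0) + 1

    def task_performer(task):                # 4. performs and updates
        application_db.append(task)
        return f"performed {task[1]}"

    def filter_feedback(result):             # 5. feedback in context
        return f"feedback: {result}"

    def interact(command):                   # 1-2. process and route
        task = semantic(syntactic(lexical(command)))
        inference_processor(task)
        return filter_feedback(task_performer(task))

    print(interact("draw line 0,0 10,10"))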

8.3.5 The Dialogue

Interaction consists of tasks involving the user and the interactive system. These tasks could take place in an interleaved fashion or in a parallel fashion, but for simplicity of discussion, we assume that they are essentially sequential, and thus an interaction (dialogue) can be considered essentially as a sequence of tasks. Tasks are hierarchical. At each hierarchical level we have the four-tier model:

  1. Conceptual
  2. Semantic
  3. Syntactic
  4. Lexical

Feedback for actions is required at all levels and all tiers. The structure of a task is, therefore, simplified to action-feedback. Task initiation could be either by the user or by the system in any general dialogue. For example, a command input may be considered as a user-initiated task (the action is the command), while prompting may be considered as a system-initiated task (the action is the prompt). Feedback is essential for all actions (in however primitive a form). Depending on the analysis of the action, the feedback may be one of many kinds (error, semantic, syntactic etc). This model is a useful top-down decomposition for the design, analysis, description, and feedback structuring of interactive systems. The representation also fits the designer as well as the user to a reasonable extent.
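
The action-feedback structure can be captured in a minimal data sketch such as the one below; the field names are our own, and feedback is recorded per tier as the four-tier model suggests.

    from dataclasses import dataclass, field

    TIERS = ("conceptual", "semantic", "syntactic", "lexical")

    @dataclass
    class Task:
        action: str                                    # command or prompt
        feedback: dict = field(default_factory=dict)   # tier in TIERS -> text
        subtasks: list = field(default_factory=list)   # hierarchical levels

    top = Task("edit drawing")
    top.subtasks.append(Task("delete line",
                             {"lexical": "keystrokes echoed",
                              "semantic": "line removed from picture"}))
    assert all(tier in TIERS for tier in top.subtasks[0].feedback)
    print(top)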

8.3.6 Suggestions for Future Work

The following suggestions are made for future work:

  1. Refine the model for the purpose of developing a design methodology. By further refinement of the model its implications for the design of systems may be established, so that the designer can have concrete advice for achieving his task.
  2. Test the model against existing systems and other models. Trying to describe systems like SMALLTALK in terms of the model is a method of validation; relating it to existing alternative models is intended to show its specific point of view.
  3. Identify (human) parameters of relevance for the systems represented by the model. Symmetry between man and machine in the process of interaction is emphasised by the model; this strengthens the need for research in (human) parameters which influence this process.

8.4 DISCUSSION

Guedj:
Our attention has been drawn to the need for a conceptual framework and we have called the group whose work was just presented the model group. There is an implied association that the conceptual framework is the model. The model group's work is based on three paradigms:
  1. It is a good idea to separate knowing and doing.
  2. There is a distinction between input and output.
  3. There is a domain of application.
How do you relate the domain of application to the user's model in this conceptual framework?
Krammer:
In this framework, the domain is an abstraction which exists. Both parties have knowledge about it included in their lower parts.
Hayes:
The distinction between knowing and doing I find interesting. We say we know things when we can talk about them. This is known as Declarative Knowledge, but there is another type. Suppose you ask an art student How do you draw a face?; he will reply, Well, you just do it.
Some knowledge is coded in perception, but it is not reportable knowledge. If we make the distinction for Man, how do we do it for the machine? Perceptual knowledge about viewing?
Anson:
There is an analogue in the computer world connected with the class or cluster approach to data structures. Two types of operation are available, one a state enquiry which corresponds to knowing, the second performs a state change and is equivalent to doing.
Krammer:
I used the terms knowing and doing which are the expressions used in the group. I have another interpretation. The lower part of the system is the knowledge of the domain, constants, data, procedures, functions, etc. The upper part can work on all these including procedures and functions. This is why the upper part is a higher level thing. That is why the upper part can control the interaction.
Engelman:
I prefer this second interpretation. One thing we have learned is that a great deal of our knowledge becomes represented as procedures. Therefore, I prefer this to the data versus procedure approach.

8.5 METHODOLOGY FOR DESIGNING INTERACTIVE PROGRAMS

When you say what you think, be
sure to think what you say.

-Muggles, Maxims

8.5.1 Introduction

This group was concerned with developing a design methodology that enables the design of good user interfaces of interactive systems. The most important feature of the methodology is that it consistently produces good designs. To this end, a characterisation of the design process is laid out.

8.5.2 Target Environment

The target environment includes the potential user and his task environment. The task environment includes tasks, methods and materials, of which the potential user has a conceptual model. For example, workers in a large corporation must communicate by sending various sorts of memos to each other. There can be several methods for doing this: the inter-company mail, a telegram device, or the telephone. Each worker is a potential user of the methods. The term potential user should perhaps be candidate user: it may not be possible to build a user interface for a particular candidate user, or somebody might restrict the candidate user from accessing the user interface provided.

8.5.3 Post-Design Environment

The post-design environment contains a user and his task environment, which includes a machine. The user has a user's model of the task environment, which includes a model of the machine. The user interface is that part of the machine with which the user comes into contact (physically, perceptually and conceptually) and is, therefore, the only part of the machine that can directly affect the user's model. Continuing the above example, a message-sending system is designed for the corporation and terminals are installed for each worker. Thus, each worker becomes a user (and his terminal a machine). The user interface includes the physical terminal and its behaviour. The user's model includes:

  1. The tasks that he knows he can do with the system (including some user tasks, such as broadcasting notices).
  2. Various system entities (for example, messages, mailboxes, other users).
  3. System operations (for example, transferring a message to another user's mailbox).
  4. Methods for doing tasks with the system.
  5. The user's understanding of how to use various commands (for example, a user's simplified view of a complex command).
  6. The user's expectations about response times.

8.5.4 User's Model

Each user's model is affected by his prior experience, by his training and by his experience with the machine after his training. That is, the user's model develops over time. The following example shows different users' models of the same machine.

Consider a box (the Marble Machine) which, when a red marble is inserted, ejects a blue marble. A small child can have a model of this machine which says that the inserted marble is painted blue and then ejected. An adult, however, will have a model saying that the machine keeps the red marble and that the blue marble is a different marble. An example of change in a user's model is a user, who for the first time encounters a system crash. The notion that the system can unexpectedly crash is now part of the user's model.
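
The point of the example is that the two models are observationally identical yet conceptually different. A small sketch of our own making puts this concretely: both models predict the same behaviour for every input, and nothing the user can observe distinguishes them.

    def childs_model(marble):
        # the inserted marble is painted blue and then ejected
        return "blue" if marble == "red" else marble

    def adults_model(marble):
        # the machine keeps the red marble and ejects a different one
        stock = ["blue"]
        return stock.pop() if marble == "red" else marble

    # Identical observable behaviour, different internal accounts:
    assert childs_model("red") == adults_model("red") == "blue"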

8.5.5 The Design Process

The designer comes to the design task with tools and materials. Tools include task analysis techniques, system design capabilities (experience, technology, etc) and evaluation techniques based on models of interaction. Task analysis generates a set of criteria based on the potential user's task environment and on his capabilities (for example, response times, memory capacities, abilities to handle simultaneous tasks, etc). Examples of criteria which have wide applications are task completion time, difficulty of mastering the system and the range of tasks which the system can accomplish. Moran's keystroke model exemplifies the use of the time criterion.
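
As an illustration of the time criterion, a keystroke-level estimate simply sums per-operator times. The operator times below are rough, representative values and the command breakdown is our own example, not a measurement reported at the workshop.

    SECONDS = {"K": 0.2,   # press one key (skilled typist)
               "P": 1.1,   # point with a graphical device
               "H": 0.4,   # move hand between keyboard and device
               "M": 1.35}  # mentally prepare

    def completion_time(operators):
        return sum(SECONDS[op] for op in operators)

    # e.g. think, home the hands on the keyboard, type a 6-key command:
    print(completion_time(["M", "H"] + ["K"] * 6))   # about 2.95 seconds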

The system design may be in the form of words, pictures or code. In addition, the designer forms a model of the post-design environment. Evaluation, based on the criteria, can be applied to both the design specification and the designer's model. Materials include such things as interactive programming environments.

8.5.6 Observations

Task analysis and evaluation techniques are essential ingredients of the design process to ensure good design. User-machine performance must be predictable to ensure consistently good design. There are many sources of unpredictability:

  1. There may be many users' models of the machine (i.e. the design of the machine is inherently ambiguous).
  2. The hardware devices may vary from place to place.
  3. The machine's interface may be extensible in some way; it may be tailored by the user; it may be tailored by the designer; it may be adaptive to each user.

Note that these are all thought of as desirable features of machines, but they come at the price of sacrificing predictability. There may be a fundamental trade-off between a machine's predictability and its functionality.

8.6 DISCUSSION

Sproull:
You mention that task completion time is an important criterion. However, there are places where full device independence is important in order to reduce the effort of getting a system running on a new machine. In this case, performance may not be a criterion. Any user interface is good enough as long as it can run.
Newman:
That is part of the criteria for the design.
Foley:
In the Marble Machine example, you say the users have different models, implying different views of how the system works. Should this not be behaves? The term works implies knowing the internal mechanisms. In my view, the differences are implementation ones rather than differences in the user's model. The user model is that you put a marble in and one of a different colour comes out.
Newman:
There are two different models. In the first, you put a marble in and the same marble comes out (painted). In the second model, you put a marble in and another marble comes out. Those models are conceptually different.
Foley:
The way that you have just said it implies that the user's model has nothing to do with implementation.
Moran:
That does not mean that the user does not construct theories.
Encarnacao:
We need to look at other areas, such as industrial control systems where they do have complex models involving adaptive behaviour and other variable features, and see if those models are applicable in our area.
Negroponte:
In the proposal, the task environment is singular - it can be used for only one purpose. As soon as you use it to send telegrams, write letters, order food, play games, plan a trip, fix your calendar, it will not work. What do you do in this situation? Also, the task environment may be different at different times of the day.
Newman:
We did consider a variability of task environments as well as variability of users. Our main point is that task analysis should be taken more seriously.
Shaw:
It is difficult to disagree with your presentation, but what do I do now?
Newman:
Readjust your priorities. We should be learning to do task analysis, evaluation, etc. We must appreciate the importance of this.
van Dam:
This is just an analogue of the programming field where Requirements Analysis and Validation is the important area everybody is talking about rather than syntax, semantics, etc.

8.7 SPECIFICATION OF DIALOGUES AND INTERACTIVE PROGRAMS

Welcum Gummy,
Welcum Muggles.
Don't tell us
About your struggles.

Good-by Muggles,
Good-by Gummy.
Take back home
Your empty tummy.

-Gummy, Scribbles
(Collected Works)

8.7.1 Introduction

The group's goals were to provide methods for describing the user interface, and the programs that implement this interface, in interactive systems.

This led to an attempt to define the words user interface. An interface was generally defined in linguistic terms, with some assumptions about sequential, parallel and interleaved dialogues. The working definition of user interface generated some promising suggestions for some aspects of dialogue specification.

8.7.2 Specifying Dialogues and Interactive Programs

In the interactive world, we distinguish two interfaces to the computer. The first between the user or operator and the computer is called the User Interface. The second between the programmer of the system and the computer is called the Program Interface. Each interface needs a Specification Language. In addition, the User Interface provides a means to communicate with the computer by using the Dialogue Language. The Dialogue Language is handled by its counterpart on the programmer side: the Programming Language.

Ideally, separate specification languages for the two interfaces should be superfluous. One can imagine situations where the program specification itself could function as the program. With the advent of high-level and very high-level languages (of which even in Computer Graphics a weak echo is heard) this seems to us a very likely possibility. Ideally, therefore, we would try to collapse the languages on the program side into one language, the programming language.

On the user side, the situation seems to be more complicated. At this moment it is difficult to see how the unification could take place. So for the time being, it is accepted that there has to be a separate dialogue specification language.

More promising solutions can be offered for the dialogue language itself. Given a language with declarative and control structures of sufficiently high level, there is a definite possibility that these structures could be used as units of the dialogue language.

8.7.3 New and Promising Approaches

Some techniques related to the problems mentioned have been discussed in the position papers of Anson, Crestin, Eckert, Hopgood, Mudur, Shaw and Van den Bos. Some of the ideas presented there inspired the group to produce the ideas given below.

Between the user and the machine, there are two languages. The language LI defines input from the user to the computer. The language LO defines output from the computer to the user. It is felt that these languages for interactive programs are closely coupled. The dialogue language should consist of interaction units. An interaction unit is an interleaved pair consisting of a symbol string xi from the input language and a symbol string xo from the output language. The elements in xi and xo are input or output primitives, which may be empty. The interleaving of xi and xo is such that the ordering originally present is preserved. This model therefore describes a universal dialogue language. It is attractive because it contains both input and output; it allows one to associate input with output and vice versa at any level of detail; and it includes both sequential and concurrent dialogues. For particular applications, one may work with a subset of the language.

The language model is powerful enough to accept sequences of primitive inputs and outputs such as i1 o1 i2 o2 but also i1 i2 o2 o1, i1 i2 o1 o2 or even o1 i1 etc. It allows simple i-o sequences, question-and-answer sequences and even type-ahead. Appropriate operators (cf. Shaw, van den Bos) have to be selected to define specific sequences of inputs and outputs out of all the possible orderings.
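
The interleaving condition can be stated operationally: a dialogue is valid precisely when it is an order-preserving shuffle of xi and xo. The checker below is our own illustration of that definition, not part of any proposal made at the workshop.

    def is_shuffle(dialogue, xi, xo):
        """True if dialogue interleaves xi and xo, preserving their order."""
        if not xi and not xo:
            return not dialogue
        if dialogue and xi and dialogue[0] == xi[0] \
                and is_shuffle(dialogue[1:], xi[1:], xo):
            return True
        return bool(dialogue and xo and dialogue[0] == xo[0]
                    and is_shuffle(dialogue[1:], xi, xo[1:]))

    xi, xo = ["i1", "i2"], ["o1", "o2"]
    print(is_shuffle(["i1", "o1", "i2", "o2"], xi, xo))  # True: simple i-o
    print(is_shuffle(["i1", "i2", "o1", "o2"], xi, xo))  # True: type-ahead
    print(is_shuffle(["i2", "i1", "o1", "o2"], xi, xo))  # False: order broken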

In the programming language, an interaction unit also contains an action body, used to check the state of the program. Higher level interaction units may be constructed out of lower-level ones in a hierarchical manner.

It is clear that, in order to process the stream of inputs, something like a parser is required, such as the one discussed by van den Bos. On the output side, it is less clear what is needed or possible apart from primitive outputs.

A complete dialogue program could, according to the model, consist of a layered and hierarchic structure of interaction units, each with its own (possibly empty) semantic actions. One could logically see this as a tree of interaction units. Branches of the tree would correspond to logically connected pieces of dialogue, perhaps called dialogue modules. Flipping from one module to another means travelling up the tree and transferring to some other branch. Control might be distributed over the action bodies, and the hierarchic structure would provide for low, medium and high level control. One of the options of the higher-level action bodies would, for instance, be to keep a record of all transactions that have taken place. This record would be a trace in the sense of Nievergelt's paper.
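
A toy form of this tree of interaction units, with invented names, might look as follows; here every unit appends itself to a trace held at the top level, standing in for the record kept by a higher-level action body.

    class InteractionUnit:
        def __init__(self, name, children=(), action=None):
            self.name = name
            self.children = list(children)   # lower-level units
            self.action = action             # possibly empty semantic action

        def fire(self, trace):
            trace.append(self.name)          # the higher-level trace record
            if self.action:
                self.action()
            for child in self.children:
                child.fire(trace)

    dialogue = InteractionUnit("session", [
        InteractionUnit("edit module", [InteractionUnit("delete line")]),
        InteractionUnit("query module"),
    ])
    trace = []
    dialogue.fire(trace)
    print(trace)   # ['session', 'edit module', 'delete line', 'query module']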

8.7.4 Future Work

The major issues are refining the conceptual model into a more precise set of possible specifications; mapping the conceptual model of a dialogue into a program; finding out what kind of programming environment should be provided.

8.8 DISCUSSION

Sproull:
Have you considered time?
van den Bos:
It has been raised, but not resolved.
Engelman:
What do you mean by formal dialogue systems? Does that include verification?
Crestin:
A formal dialogue system is a formalism for defining a dialogue in interactive programs.
Shaw:
We did not discuss verification. You can study specification without verification.
Krammer:
Interaction has been defined as a style of control. As this model does not contain control, it does not separate interaction and batch.
van den Bos:
It will have to include control at some stage but we did not want to consider it now.
Hayes:
There is an implication that the input language is the same as the output language. This rules out many useful systems. For example, when teaching physics, you might input a word problem to a student and expect a picture back as answer. It is essential, therefore, that the input language is different from the output language. From what I understand of your grammar, it appears as though anything can happen.
Shaw:
Yes. Except that I and O are paired.
Hayes:
Except when I and O are omitted.
Shaw:
If you have a clock that appears on the screen every 40 seconds that is a valid interaction. No input and many outputs.
Krammer:
I can see that a description of the input language is useful for analysing input and, similarly, for output. But what does connecting them do for you?
Shaw:
Three things. It should be useful for programming dialogues, telling you what the system does for you and designing the system.
Krammer:
I have the feeling that you need to add something that defines what is happening in between the input and the output before you can describe a dialogue.
Shaw:
I agree, you have to specify semantic actions somewhere. I think this is partly a matter of interpretation. The implication is that we are taking too narrow a view of input associated with output, without any context, state activity or structure of what went on before, and that we are suggesting we can ignore that.
That is a wrong interpretation. We are saying that it is useful to break them up in terms of input and output and to associate them. That is all the context we want, and anything executable can be within that framework. We can probably handle the input language part of it reasonably, but the output will be fierce. I think we can handle all the parallelism anybody asks for within that framework, including multi-media input and output.
Hayes:
My problem is that you do not seem to have gone far enough. It appears that you have no structures.
Sproull:
Formally, there is no problem in representing the complete dialogue as a string of symbols from a single language. Similarly, you can represent the complete dialogue by a sequence of symbols from the two languages LI and LO. However, the spirit of that description does not capture what is going on, activities overlapping, etc.
Shaw:
The answer to that is our concept of shuffling which allows input and output pairs to get out of step. Other people have models of parallel computing in their position papers, which give good descriptions of parallel processes. Our proposal was to try and apply those ideas on top of the I/O pair model.

8.9 PROGRAM ENVIRONMENT

The only good mushroom is a cooked mushroom.
-Muggles, Maxims

8.9.1 Introduction

The construction of interactive programs is an activity of engineering design and implementation. While we have little experience in carrying out this task, its obvious counterpart in the more established engineering disciplines can provide us with a methodology that has been developed over hundreds of years.

In conventional mechanical or civil engineering practice, the design environment provides two categories of facilities - a list of component parts, and a collection of previous designs covering the range of products of interest. Rarely, if ever, is a new design constructed from scratch, with new designs for the individual components. Even given the components, new designs are almost always minor perturbations of some already existing design.

In the area of program construction, the lesson of re-using component parts has largely been learned. The use of program modules, subroutine packages, high level language constructs, etc are examples of reusable component designs. However, the formal recognition of the technique of constructing programs by making relatively minor modifications to old ones is yet to come. Instead, the programming community either relies on recommendations for programming style and structure or, more frequently, builds every new program afresh. For example, subroutine packages are rarely made readily available at the source level, and even when the source is available, it is rarely written with a view to easy modification, easy understanding, etc, to the degree necessary for such activities to be routinely carried out.

The case is argued for the development of interactive programming environments, that support far more than just a programming language. The main requirement of a programming environment is that it be conducive to the construction of good interactive programs. We believe that the critical part of this is concerned with the structure of the resulting program, not so much with its components. The uniformity of representation resulting from the fact that we have an interactive programming environment for the design and construction of interactive programs (as contrasted with designing, say, bridges) enables the environment itself to set a style for the resulting program.

The other main benefit of a good programming environment is that it makes programming easier. Making programming easier enables more iterations and testing of specific generated interactive programs, thereby relaxing the need to predict all aspects of performance and enabling the production of better programs. This is the main argument for many CAD systems.

The environment should support integrated editors, debuggers and the like, as well as exemplars of good interactive programs. There will not be a need for a great number of such exemplars, since we believe that the number of fundamentally different interactive program structures is quite modest. On this assumption, it should be possible to specify a set of well understood and agreed upon exemplars, which will constitute a set of informal standards. Such an achievement would greatly promote portability, in that all programs would conform to one or other of the standard exemplars. Note that this approach dismisses the notion of achieving a single standard program structure, as has been inferred by some. It is thought that this interactive programming environment will best be placed on hardware specifically built for interactive use.

8.9.2 The Approach

We believe that these goals can best be achieved by the provision of a range of tools. Many of these are already widely available. What is lacking is a unifying theme. We consider that four classes are required:

  1. Integrated Frameworks. A unified means of access to the tools (e.g. SMALLTALK, BASIC and APL environments).
  2. Warehouse. A collection of exemplars of systems that can be withdrawn from the warehouse and used as a basis for building new systems.
  3. Museum. A wide ranging collection of good, bad and indifferent systems that can be viewed and analysed by system designers and users.
  4. Software Tools. Editors, browsers, compilers, etc.

8.9.3 Integrated Frameworks

Most people are familiar with the environments provided to support BASIC and APL. SMALLTALK is an example in which it is hard to distinguish between the language issues and the environment. On the other hand, FORTRAN environments are always entirely disjoint from the FORTRAN language. What we seek is a framework giving unified access to the software tools and the means to manipulate program, data and other entities held within the system.

8.9.4 The Warehouse

This is intended to be a depository of good practice. It should contain programs which provide widely admired solutions. These will then be used as starting points for the constructions of new systems. Typically, most of the flesh will be removed and a new body constructed on the old skeleton. What is important is not the details and finish of the warehoused program, but its shape or control structure. These skeletons will become the standard ways of doing things. We do not envisage that there could ever be a single standard structure, but rather that there will be a range of useful generic forms, each of which is a standard.

The warehouse may also include rules of thumb as has been done for compiler writing and operating systems. Like cookery, interaction is currently 10% theory and 90% beliefs. A rule of thumb might be execution time is proportional to the number of keystrokes or every dinner guest needs half a pound of meat. Some examples of beliefs are:

  1. All systems should respect whether a user is right or left handed.
  2. All systems should be self-explicating to a degree of elaboration commensurate with a particular user's experience with the system.
  3. State diagrams of interactive systems should be planar.
  4. Users should not be able to make errors in the use of the system.
  5. Maximising opportunity for sensory inputs (both machine and human) is good.
  6. Good interfaces do not cost (implementation and life cycle) much.

8.9.5 The Museum

It will be many years before we can codify good practice and style, so we need a method of illustrating what is good and what is bad. For interactive systems, this illustration has a temporal component and can only be made dynamically. We, therefore, propose a Museum, probably in the form of video discs, of past and present systems in action. This would allow hard won experience to be spread through the community.

8.9.6 Software Tools

In the future, so many people will use computers that we shall need personalised software tools. Such tools must be interactive with the freedom to proceed in parallel performing several tasks at once. If necessary, the tools should be able to deal equally well with both program and data. They should be able to manipulate entities at levels varying from the high-level language module to a single instruction. It should be possible to work at different levels from fast browsing to detailed examination.

Programming languages should allow the production of basic modules to be executed and control programs to indicate how the modules are accessed. We will almost certainly have a need for structured design and the ability to describe abstract data types and to map these on to real variables. Step by step execution with the ability to vary the speed is important.

Editors will need to enter, modify or update both programs and data. We shall need many editors depending on the objects to be manipulated. Also, people use different methods of editing and will need to work at higher levels (page, line, segment) and lower levels (character, point).

Viewers are required to present any piece of information in a variety of formats. Text, for example, could be displayed as a set of characters, a picture or even a spoken sentence. Again, it should be possible to present either the full set of information or some specified subset.

Administrative aids are required, for example, to automatically derive a menu from the structure of the different modules to be used. Automatic documentation, cross-referencing, program indexing etc are also necessary.

Debugging aids are needed both to find mistakes in the program and the data. We need backtracking mechanisms to allow us to move backwards in time. Structuring mechanisms are required to allow us to arrange objects for easy access and to relate objects for easy manipulation as a whole.

8.9.7 Conclusion

In conclusion, let us review what has been said. We have not commented on what constitutes a good or a bad interface. Nor have we proposed any specific solutions to any particular problems. Rather, we have set out a way forward which we believe will lead to the design of better systems. Much of the infrastructure is already widely available, some unified systems are already in use and we believe that this approach would now be widely and profitably adopted.

8.10 DISCUSSION

Sproull:
We are certainly unhappy about user interface design today and yet it is hard to find out what it really was that made JOSS and SKETCHPAD good interactive systems. I agree, we need to build on previous design experiences.
Sancha:
I would like to make a strong assertion that there is only a small number of equivalence classes of interactive systems. There is not an infinite variety. By small, I mean something of the order of 20. The structure of a system is influenced, probably determined, by the internal model of the application. For example, the structure of text is repeated in a text processing system. There is only a small number of logical structures and hence, only a small number of possible systems.
Hayes:
I like the warehouse idea. It is reminiscent of the American National Theatre store of old choreography.
van Dam:
What you have said is a mixture of idealism, cynicism, and pragmatism. We should try to come up with useful things that we believe in most of the time to help those who do not have the experience. Work on the case book, beliefs and rules of thumb is important. However, it should be clear what are just beliefs and what are rules of thumb.
Kay:
I liked the idea that interaction is a style of control. However, it is hard to codify this style. You cannot teach it but you can show people examples of what it is.
Herzog:
How do you decide what is good or bad in the museum and warehouse? Do you take a popular vote? There are no criteria for evaluation.
van Dam:
I agree, I do not think it is easy to pick out good/bad systems. There are differences in style. Can you get a consensus on what are good and bad styles? Why not try and measure structures and styles in terms of understandability, modifiability, reliability, portability, efficiency, etc. Can we satisfy all of these with one particular style?
I think the warehouse is a good idea. No one is doing differential programming except in close knit groups.
Baecker:
The museum appears to contain the equivalent of the Works of Shakespeare. What is very necessary is to have commentaries in the museum on each exhibit.
Newman:
It has been stated that Program Structure was the most important aspect of building interactive systems. I disagree. Program structures are not as important as evaluation techniques. How do you decide what to use if you can't evaluate the system?
van Dam:
Evaluation process is necessary also. One is no good without the other.

8.11 PORTABILITY AND DEVICE INDEPENDENCE

A turtle should take fright
at the sound of a boiling pot.

-Muggles, Maxims

8.11.1 Introduction and Approach

The original assignment of the subgroup was to answer the question Can we achieve portability and/or device independence at the user interface? We accepted such an assignment as a reasonable means to a much broader goal: to reach some conclusions about interaction in general. We have the firm belief that such conclusions can be reached by studying interactive graphics applications, an area in which we have enough knowledge and one sufficiently rich to serve as representative of the whole area of interactive programs. Consequently, we changed the assignment to the following one:

How should interactive graphics programs be structured to achieve portability and device independence?

In the course of our meetings, discussions have been more and more focused on the first half of this assignment (how to structure interactive graphics programs), while the issue of portability was relegated more and more to the background. We started by looking at those position papers and presentations which either propose some specific methodology for forcing some program structure upon the application programmer, or propose some model for interactive environments. In the first class, there are the papers of van den Bos, Mudur and standards proposals like the Core and GKS; in the second class there are the papers of Anson, Sancha, Rosenthal, Baecker and the presentations of Kay on SMALLTALK and Newman's film on MARKUP.

8.11.2 Discussion

Virtual Input Devices (for example, pick or locator) have been proposed as a way of isolating application programs from the operating characteristics of particular physical input devices. There are two principal motivations for doing so:

  1. Program structure is made simple and clearer, thus enhancing maintainability and portability of the application program.
  2. Peculiarities of specific input devices are not made an integral part of the application program, thus enhancing device independence.

The immediate benefits of moving from an environment where physical devices are directly accessed to an environment built around a particular set of virtual devices have been demonstrated in practice. Some advantages include:

  1. Program structure [cf the examples in the paper by Bergeron, Bono and Foley in Computing Surveys 10.4] is improved, even if it is not the best that might be done.
  2. Device independence across interactive work stations with similar input tool facilities is attained, even if serious problems arise when the work stations differ dramatically in their capabilities.

However, several position papers (Anson, Rosenthal, van den Bos, Baecker) have argued that in other, more important aspects, virtual devices (in particular, the particular set of virtual devices suggested in the standards proposals like the Core System and GKS) are of marginal help at best, and are perhaps even detrimental in writing truly portable, useful and usable programs.

The notion of virtual input devices, as currently used, tends to defeat portability. In particular, no single set of virtual devices could be used in a manner that would permit a desirable degree of device independence.

Application-specific virtual devices can be very useful. If each application designer creates a set of virtual devices specifically appropriate to the application, such a set is a good top level for a hierarchy of virtual devices. Such a hierarchy terminates at the set of physical devices actually available. As an aid to application programmers, libraries of virtual devices should be available. Since several such devices are known to be widely applicable, the basis exists for designing such libraries. However, such device definitions should be at the application level so that they may be modified, replaced, or supplemented as required by each application. Methodologies for providing such a facility are suggested in position papers by Anson and van den Bos.

Some alternative approaches to interaction do not include the notion of devices, as such, at the application level. These approaches are summarised in position papers by Hopgood, Rosenthal, and Sancha, among others.

The principal objections raised regarding the incorporation of fixed virtual devices into support software are:

  1. An application programmer does not typically design the user interface in terms of abstract virtual devices, even when they must be used. Instead, the physical interface with the user is defined, necessarily, in terms of those particular physical devices which are used on the system to realise those virtual devices. In effect, actual device characteristics tend to determine the choice of virtual devices. Consequently, the concept of virtual device contributes little (if anything) to the design.
  2. The simulation of virtual devices on real devices effectively predetermines vital parts of the user interface. Different applications (and even different users) require different interfaces. Therefore, it is inappropriate to leave this part of the design to the developers of support software. This becomes especially apparent when different installations provide different user interfaces to the same virtual devices.
  3. Working with a fixed set of virtual devices discourages the introduction and use of new physical devices, especially when the virtual devices in the set are modelled as closely to existing physical devices as those in the Core and GKS.
  4. Some useful characteristics of existing physical devices cannot be accommodated by any single fixed set of virtual devices, as explained in the position paper by Anson.

There is also a logical problem with any particular set of input primitives. Such a set should form an orthogonal basis for input functions, i.e. it should not be possible to simulate any one in the set by a subset of the others, and it should describe every input function conceivable as a composite of these primitives. The first requirement is certainly not fulfilled; we have already expressed our serious doubts regarding the second.

Replacements for virtual devices were discussed. The proposal that seemed most acceptable allows the application programmer to define a hierarchy of abstract, application-dependent input tools. When a program is moved to another system or operated from another workstation, the binding of the abstract input tools to physical input devices must be, and ought to be, respecified. The mechanisms for redefinition/respecification remain to be explored in full and must be analysed for ease of use, simplicity, flexibility, completeness, etc.
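
The proposal is easy to make concrete. In the sketch below (all device and tool names are invented), the application is written against an abstract input tool and only the binding table is respecified when the program moves to a differently equipped workstation.

    def read_tablet():    return (120, 45)   # stand-ins for device drivers
    def read_lightpen():  return (118, 47)

    class InputTools:
        def __init__(self, bindings):
            self.bindings = bindings         # abstract tool -> physical reader

        def get(self, tool):
            return self.bindings[tool]()

    app_tools = InputTools({"position_of_symbol": read_tablet})
    print(app_tools.get("position_of_symbol"))

    # Moving to another workstation: only the binding is respecified.
    app_tools.bindings["position_of_symbol"] = read_lightpen
    print(app_tools.get("position_of_symbol"))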

An opposite view, in favour of the current proposals, relates more to the goals of standardisation per se than to the issues expounded by the advocates of a replacement for the concept of sets of primitive virtual devices. The standardisation arguments proceed as follows:

Propositions
  1. Standards codify current (codes of) practice.
  2. The Core and other standards have been proposed as the first approximation (an interim) to a reasonable standard to evolve over time as designers and users evolve their own understanding with practice.
  3. As a compromise we know that there are many areas in the Core (and other proposed standards, e.g. GKS), which are far from perfect, optimal, etc., but since it is based on current practice, it will be both usable and implementable (existence proofs abound). It will be better than what many people in industry, etc are forced to use now and seems to appeal to applications programmers, if not to the experts.
  4. If we put a hold on the standardisation effort now to fundamentally reopen/re-examine an area like input (tools), precious time will be lost and more confusion will be generated than is necessary or productive.
  5. Even the people who want to do research with graphics need a standard in order to test their innovations. Meanwhile, there are a number of modest examples, realistic in certain environments, of new developments (for example, hierarchies of input tools) which ask for relatively small changes. These should be considered carefully by standardisation committees.
Recommendations
  1. Haste makes waste, but in the real world, something is better than nothing (provided we can change it)! Therefore, unless one can prove there is a fatal flaw in the input area (or any other), one should let the first standard(s) get out so we can experiment with them and learn from them, while the hard research work goes on to provide us with better conceptual models and techniques. It is to be noted that evolutionary standards have worked in precisely this way in the network community. One learns by doing.
  2. Caveat to users of logical devices: If your major design goal is to port your applications program to systems with different interactive devices, then structure your program with a level of abstraction of virtual devices above those provided in the Core System. This provides a clean level at which to adapt the application program to the particular devices of another environment.

Overall, the subgroup felt that, in principle, the suggestions put forward in favour of application-defined input tools showed much more promise in serving as a long-term basis for graphical interaction, but that the goals and purposes of standardisation addressed a somewhat different issue, namely, the costs and benefits of a standardised situation based on current practice compared against the current, unstandardised situation, or a standard based on new, untested concepts.

Following the discussions about input tools and virtual devices, we then discussed how the applications programmer gains access to the signals resulting from the operator's use of the input tools. After exploring several paradigms, all similar to the wait-for-event mechanisms of GPGS, the Core and GKS, it was suggested that we try to apply the lessons learned from the presentations about SMALLTALK and from the work of other groups. Namely, we focused on two assertions:

  1. Interaction is a style of control and is accomplished through a language.
  2. Good interactive application programs can only be developed in a good interactive environment, where the environment is provided by the system.

Hitherto, virtual devices have been regarded as the fundamental primitives for input. In previous paragraphs we have pointed out some objections to this notion. Here we present an alternative: regard a language processor as the fundamental part of an interactive system.

Every interactive program has an input language. Consequently, each program contains a language processor in some form or another. Currently, the application programmer has to write this language processor (which in many applications is not very explicit) and therefore parsing is often not done in the best way possible. Substantial experience has been built up in writing language processors of various kinds; this experience should be packaged up and made available to the applications programmer in a device- and machine-independent fashion. We therefore propose to regard the language processor as the fundamental part of an interactive system. It provides a framework into which the application programmer fits the modules corresponding to the appropriate semantic actions for his application. In this way, modularity and good program design are encouraged. In effect, the programmer writes modules and no longer writes the main program.
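
A minimal sketch of the resulting program structure (in a modern notation; the token format and module names are illustrative assumptions): the language processor is the system-supplied main program, and the application consists solely of the semantic modules it dispatches to:

def move_to(x, y): print("current point set to", (x, y))
def draw_to(x, y): print("line drawn to", (x, y))

SEMANTICS = {"MOVE": move_to, "DRAW": draw_to}   # the application's modules

def language_processor(token_stream):
    """System-supplied main loop; the programmer writes no main program."""
    for verb, args in token_stream:
        SEMANTICS[verb](*args)

language_processor([("MOVE", (0, 0)), ("DRAW", (10, 10))])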

If this approach should turn out to be feasible, at least two issues have to be decided: what kind of processor should be used, and how the input language is to be generated from the physical devices. The position papers contain several proposals regarding the first issue: the tools of van den Bos, the table-driven language processor of Sancha, the production systems of Hopgood and the devices of Anson. It is not necessary to decide on any specific processor; several can be provided to fit different applications. There is one proposal regarding the second issue, to be found in the position paper by Sancha: the real devices should be mapped onto an application-specific input language by table-driven string generators, which are real-device specific. The tables can be changed dynamically; changing a table may correspond to changing the mode of interaction.
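
A sketch of such a table-driven string generator (the names are illustrative only; Sancha's position paper should be consulted for his actual scheme): real-device events are looked up in a table that maps them to input-language tokens, and replacing the table at run time changes the mode of interaction:

DRAW_MODE = {"button-1": "MOVE", "button-2": "DRAW"}
TEXT_MODE = {"button-1": "CURSOR", "button-2": "INSERT"}

class StringGenerator:
    """Real-device specific; emits tokens of the application's input language."""
    def __init__(self, table):
        self.table = table
    def set_table(self, table):            # dynamic table change = mode change
        self.table = table
    def tokens(self, device_events):
        return [self.table[e] for e in device_events if e in self.table]

gen = StringGenerator(DRAW_MODE)
print(gen.tokens(["button-1", "button-2"]))   # ['MOVE', 'DRAW']
gen.set_table(TEXT_MODE)                      # enter text-editing mode
print(gen.tokens(["button-1"]))               # ['CURSOR']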

Sancha's restriction to a single sequential input stream makes his processor the simplest of the class of processors, but raises an important question:

What distinguishes those applications which can utilise the simplest kind of processor from those which cannot?

The aspects of interaction suggested as requiring more complex processors included operator-perceived concurrency and dialogues which can be interrupted and later resumed. This question was not resolved, nor was time available to document fully the advantages and drawbacks of this approach to the interactive programming environment. The sub-group felt that, despite the treatment of application control over echoing and feedback in Sancha's paper, they had not considered this area in sufficient detail.

8.11.3 Suggestions for Future Work

The main suggestions for future work are to investigate alternative methods for binding physical devices to application-specific functions, and to investigate the relation between specific applications and the language processors appropriate to them.

8.12 DISCUSSION

van Dam:
In analogy with extensible languages, all you need is the extensibility mechanism and you can bootstrap from ground zero. However, what finally did become adopted was to start from a not uninteresting set of basic primitives and bootstrap from there. I would hate to see no virtual devices proposed at all.
Rosenthal:
Show me an applications programmer who designs his user interface in terms of virtual devices. It cannot be done. He thinks in terms of the terminal that he normally uses. In our system, we have a great deal of code built on our own group's unconscious model of what a locator is. It is assumed that it is like a pair of Tektronix cross-hairs that move all the way across the screen. The system will not work on anything other than the Tektronix it was originally written for.
Our second point is that in any interactive system there is going to be some input language and, therefore, it is going to need to be parsed. If the applications programmer is good, he will write his own parser. If he is not, it will be a mess. So why not give him a good parser which forms the central part of the system? This is the main point in Tom Sancha's position paper.
van Dam:
Sancha has a classical minimalist position. It works fine in Tom's environment, among extremely bright people. They are not only builders of Ferraris, they are drivers of Ferraris also. This is too much to ask of people who construct systems in the real world. I agree with extensibility, but give them something to start with that they can build on top of. The UNIX environment, which is the kind of thing Tom wants, was built for specialists. I am arguing for a tools environment plus a base.
Rosenthal:
Good applications programmers have been known to build bad frameworks; it is better to provide them with a good environment. It is suggested that there is not an applications program as such, but rather a set of modules called by the input parser as determined by tables.
Sancha:
We are arguing that any particular set of concepts, such as virtual tools, does not constitute an environment. We need to have a system for putting the parts together, a rich programming environment; just providing a set of tools is not that. A syntax environment also needs to be supported.
Herzog:
You are suggesting a framework into which the pieces can be plugged, which could be a FORTRAN main program with a graphics subroutine package etc.
Bono:
We are making a stronger statement: the environment does not have to be as rich as SMALLTALK, at least initially, to be useful. It could be less rich and still provide the right way of designing interactive programs, rather than the traditional approach. We do not think that an applications programmer writing a main program and calling utilities is the way. We are advocating a different approach.
Rosenthal:
We agree that interaction is a style of control. Our applications programs use an environment at present based on batch processing. We are saying that it is possible to put in a generalised table-driven parser for the input and, by doing this, give the programmer a lot more than he gets in a subroutine package.
Baecker:
Can you do that without sacrificing style and flexibility?
Rosenthal:
Tom [Sancha] firmly believes you can; others are less convinced. You can write the application as a set of semantic actions on the model. There is a table-driven parser which interprets some application-dependent input language. The parser calls the application modules which the application programmer has written.
Foley:
Virtual devices are still the way of getting into the parser from physical devices. If, in a certain environment, logical devices are being simulated in a way unpleasing to the user, then you can factor that logical device out of the parse table. In your parse table, you can provide your own mapping of the logical devices that you do like, the ones that are simulated well, on to whatever tokens you need in your input language.
Rosenthal:
Yes, if you are binding at run time between particular hardware device actions that the user makes and particular bits of input language. In effect, there are going to be different modes of operation, and the particular form of the parse table defines the mode.
Anson:
Of key importance is that this allows the user to get at all the features of the devices rather than a subset defined by the Virtual Device drivers. The user always sees real devices. Applications programmers concerned with a good user interface will want to see those real devices.
Rosenthal:
I forgot to say that this is a framework, not the framework. Some classes of application fit well, others do not.
Baecker:
Can we characterise what user interfaces and dialogues can be handled by this approach, and why? I saw the idea first in a paper by Larry Roberts entitled The Graphical Service System with Variable Syntax, back in 1965 and published in the IEEE. We, ourselves, were thinking about it at Lincoln Labs but never did it. It is possible that the dialogue will become very stilted with that kind of formalism. It is possible that some new formalism, like Alan Shaw's, if suitably implemented as a parser, might lead to a sufficiently rich dialogue. It is important to categorise when it does and does not work.
Hopgood:
I do not believe that you can do this without stilting the dialogue. It just does not work if you use a linear single-stream parser. With anything that involves more than one action in progress at a time, it breaks down.
Newell:
Could you give me an example?
Hopgood:
Suppose we wish to draw a line between two points by moving to the first and then drawing to the second. This produces tokens that can be parsed by the linear string parser without any trouble. Now suppose we have input the first point and we need to zoom in to position the second point accurately. The method then breaks down, unless you specify that, in the middle of any sentence, tokens from any other sentence can occur. This makes the parser terribly complicated.
Newell:
That was a good example. It is difficult to build systems with that kind of interaction, in which you have in mid-sentence a sentence belonging to a different command and then return to the original. Normally you have some ad-hoc polling which gets woven into the structure of the program - a rat's nest. At least if things are put in a parse table, you can see where everything is.
Hopgood:
But the parse table grows to infinity. For example, see our position paper. The states grow. You just cannot have a single stream lexical/syntactic parser. It must be more complex. You must go to something like Production Systems for multi-channel input. The point we were making in our position paper is that this approach will not work.
Shaw:
Theoretical point - if you have a general parser for that particular problem then it will grow exponentially in size. You will have to write something specific. Table driven parsers are not the way to go in the general case.
Rosenthal:
If Bob [Hopgood] wants Production Systems, fine. We want to use linear parsers.
Hopgood:
They just will not work.
Sproull:
I want to test an hypothesis. It strikes me that one of the strengths of this proposal is that one need standardise nothing. The parser is not device dependent and can be written in your favourite programming language. It need not be part of either an in-house or out-house standard, it can be just part of the application program. It is just like one of those tools, the sorting package, that you pick up from the library and adapt for your purpose perhaps. If the problem becomes exponential, then you can do something else. The virtue of this idea is that it requires standardising essentially nothing. It is a recommendation of how to structure an application program that will be interactive and this helps portability significantly, but it does not require all parts of the world to subscribe to a particular view of parsing.
Anson:
There are two issues being debated. One is the notion of supplying a parser, the other is whether we should standardise on a fixed set of virtual devices. I personally view them as quite separate issues especially as I do not like the parser idea and I would like to get rid of the idea of a fixed set of virtual devices.
Baecker:
The intention of my earlier suggestion was that this is a 15-year-old idea; let us not shelve it for another 5 years. Let us take the idea and explore it. PhD students can work on it from Los Angeles to Budapest. Serious research work is required. With regard to what Ed Anson said about a fixed set of virtual devices, it is not the concept of virtual devices that is bad, but the number of virtual devices that have been provided. There should be many more virtual devices (see my position paper). I agree that standardisation was premature.
Sproull:
I thought that this might be a signal that this group wanted to generate fairly strongly. It is a concrete thing that has a lot to do with current practice and standardisation. If people think they are on the wrong rail, then they should say so. We should try and find out what support there is for this idea.
Newman:
I regret that this discussion of parsers has come so late. In contrast to Seillac I, it is very difficult to fall back on good examples. It is easy with graphics packages to give examples, but with interactive systems it is extraordinarily difficult. Progress will be slow so long as we cannot refer, as Bob Hopgood just did, to easily understood examples. We have seen a lot of examples of SMALLTALK in which parallelism in the user interface is a very important part.
Herzog:
I have detected that the concept of parallelism and the amount of it is an issue. It has been stated that FORTRAN programs using READ and WRITE have no parallelism whereas SMALLTALK and the systems presented by Nick Negroponte imply a high degree of parallelism.
Rosenthal:
We are considering methodologies not to promote good ideas, but to prevent bad designs. Our recommendations are aimed at preventing bad design.
Anson:
There is no way we are going to resolve this; there are two points of view, and neither group is going to turn the other around so that we can come to a consensus. Those of us who think that a fixed set of virtual devices is not a good idea should come up with an alternative, and those who think they are a good idea should produce a defence.
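
The point at issue can be made concrete with a small sketch of Hopgood's zoom example (modern notation, illustrative names, point tokens omitted for brevity; it is no participant's actual system). A linear parser for the sentence MOVE DRAW copes with a single token stream, but allowing a ZOOM ... END sub-dialogue to begin mid-sentence forces the recogniser to grow a suspension mechanism, and in general one such mechanism per interruptible dialogue:

def parse(tokens):
    state, suspended = "START", []
    for t in tokens:
        if t == "ZOOM":                       # interruption: suspend the
            suspended.append(state)           # current sentence
            state = "IN_ZOOM"
        elif t == "END" and state == "IN_ZOOM":
            state = suspended.pop()           # resume where we left off
        elif state == "START" and t == "MOVE":
            state = "HAVE_MOVE"
        elif state == "HAVE_MOVE" and t == "DRAW":
            print("line drawn")
            state = "START"
        else:
            raise SyntaxError(f"unexpected {t} in state {state}")

# Zooming between the two halves of the sentence now parses, but only
# because the recogniser is no longer a plain table: it has grown a stack,
# and every further interruptible dialogue multiplies the cases.
parse(["MOVE", "ZOOM", "END", "DRAW"])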

8.13 QUESTIONS RAISED AT SEILLAC II

Below is a list of some major areas discussed (or mentioned but not discussed) at the Workshop, and a list of questions associated with these areas. Some questions pose topics for future research. Others request a clear understanding or formulation of current practice as a prerequisite for undertaking such research.

8.13.1 Models of Interaction

What is interaction? What is an interactive system? An interactive dialogue? What different models of interaction (frameworks to describe interaction) have been proposed? What are their essential similarities and differences? Can they be unified as completely as the working group implies? In what sense do different models of interaction affect our construction/description techniques, our programming environment, and our user interfaces?

8.13.2 Interaction and Reaction

What are the essential differences between truly interactive (mixed initiative) systems and those that are only reactive? Have any of our graphic systems been truly interactive? Do we require a true artificial intelligence in order to build truly interactive systems? Will such systems require different construction or description techniques or programming environments? If so, how must they differ?

8.13.3 Formal Construction/Description Techniques for User Interfaces

What formal techniques have been proposed? Is there a difference between techniques for construction and techniques for description? Is there a difference between their application to interactive systems and to user interfaces? If so, into what categories do existing techniques fall? Have these techniques been studied in depth theoretically, or implemented and studied in depth empirically, or only used in 2 or 3 toy examples? How do the techniques work? Where do they fail? Why? Does their use stilt the interaction style or restrict its richness?

8.13.4 Programming Environment

What characteristics need a programming environment have in order to facilitate the construction of interactive systems? In what sense are the needs of the programmer of interactive systems different from those of other systems or applications programmers? What are the essential features of those environments that are reported to be successful (for example, SMALLTALK, Sancha's system, etc.)? How do these features relate to user interface design?

8.13.5 Towards a Methodology for User Interface Design

What methodologies for user interface design have been proposed (for example, Chapter 28 of Newman and Sproull, 2nd Ed.)? How do they compare? How good are they? Can they be evaluated in any rigorous sense? If so, how? Can they be translated into design cookbooks or codes of practice that are usually applicable in a straightforward way? If so, what are the resulting codes of practice?

8.13.6 Evaluation of User Interfaces

Can user interfaces be evaluated? If so, how? What theoretical framework and empirical methods are applicable (for example, Moran's)? What results have been achieved? Does such research, or our collective experience and intuition, suggest some principles, guidelines or rules of thumb that can be applied to develop a design methodology as in the section above? What are these principles or guidelines?

8.13.7 Portability and Device Dependence

What does portability mean? How do we measure portability? Are there different degrees of portability? What empirical studies of the portability of interactive systems exist, and what do they teach us? What are the effects on portability of the choice of interaction model, formal construction/description technique, programming environment or methodology of user interface design? Under what conditions is portability important or not important? Must we give anything up to achieve portability? If so, what?

8.13.8 Standardisation

Is the development of an interactive graphics programming standard premature at this time? If so, what are the costs and benefits of going ahead anyway? Could we stop if we wanted to? If we stopped, what should be achieved before we go ahead?

8.13.9 Future Trends

What will be or should be the effect on interactive systems of new user groups and purposes, new input and output technologies, more variegated interactive media, and more intelligent software? What are their costs (forecasting over some period) and benefits? How do we evaluate or measure the benefits, and distinguish that which is deep and substantial from that which is dazzling yet superficial?

8.14 MUSINGS AND RESTATEMENTS OF MOTHERHOOD

Fat sounds, thin sounds,
Sounds that make you start;
Queer sounds, near sounds,
Sounds that chill the heart.

Approaching sounds, encroaching sounds,
Of someone stealing near,
Creeping sounds, weeping sounds,
That make you shake with fear.

-Gummy, Scribbles
(Collected Works)

8.14.1 Interaction

  1. What the essentially different ingredient in interaction is continues to elude us; nor have we agreed on whether it is necessary to know what interaction is. (For example, should batch be a proper subset? Can one form a proper model of interaction without knowing it?) Is it sufficient to know it when we see it, or should substantial effort be expended to obtain crispness on what may be theological and semantic issues? Is the notion of dynamics not relevant?
  2. Should one think of human/human interaction as the only useful/natural paradigm to be approached in the limit, with the Turing game as the ideal? What happened to the old paradigm of the partnership, each member contributing his strengths (roughly intuition/pattern recognition versus enumeration/number crunching)?
  3. What is/should be our universe of discourse? There are conflicting goals: delimit it to make progress, or expand our horizons to deal with (immediate) future technology and the new interaction modes it implies. Should we start by revising the ping-pong model for interleaving, or by considering the most general model possible: multi-input, multi-output, without causality as a necessary constraint, and with many interrupted parallel processes (like our Seillac deliberations)? Can most designers, let alone users, cope with such complexity? How (for example, using what paradigms)? Is a Unified Theory of all modes of interaction possible (or desirable)?
  4. Lots of hard work remains on formal meta-languages/descriptions for interactive systems and interactive processes (they are different!). They should address all levels of the user interface from the hardware on up, and profit, as much as possible, from analogies to layered architectures such as network protocols (correspondence between corresponding levels, equilibrium, etc.). How should causality, the time domain, and the mode/technology be factored in? What does (close) coupling mean? Should there not be separate specification and publication languages, the latter with dynamics/simulation such as video or movie (at least with decent static graphics, not just text)?
  5. Metrics for evaluation, based on controlled reproducible experiments, factoring in user, implementor, and system (resource) parameters.
  6. We need a design methodology which is properly rooted in psychological principles from cognitive psychology and ergonomics, as well as in standard engineering design.
  7. The Warehouse/Toolkit/Taxonomy of Interactive Programs - we learn from example. We need to try to find classical program structures, styles of interaction, and interaction gimmicks and techniques - to find equivalence classes. For example, what are the pros and cons of disappearing menus as used in SMALLTALK? When will they work and when not? We need to make them available, publish them properly, and list successes and failures. Show the system itself so we can get the feel of it. We should sponsor standard loaner films and cassettes.

8.14.2 Program Structures

Program structure is the most important and least well understood concept in terms of how to choose the appropriate/best design. What good structures and styles are required for:

  1. Understandability, Modifiability, Reliability.
  2. Interactions and their Specifications.
  3. Portability/Device Independence.
  4. Ease of Implementation.
  5. Efficiency, Response time, etc. (it will continue to matter)

Are these conflicting goals? For example, (parser) table-driven interaction models are very attractive - where do they apply, and with what cost/benefit? How do they fit with relational database table models? Are object/simulation models and/or process models the most natural, and how do we approach them with our current tools?

8.14.3 A Methodology for Making Progress

  1. How do we make progress in implementable incremental steps from our antiquated batch FORTRAN/DVST world to the future world of, say, object-oriented Ultra High Level languages and their integrated, rich environments (hard, firm, and software), where we primarily do incremental programming? Even if we could all agree on what we want, how do we implement it, teach it, and get it past commercial and human inertia to get it accepted? Are the new paradigms more or less complex than the old? How can we make concepts/models sufficiently simple to be accepted?
  2. Since we do not have a single publication medium, could/should we have an informal newsletter, mail (progress) reports? Will there be a Seillac III? How will it be prepared and run?

8.14.4 Interaction Problems

We need to identify some other hard interaction problems:

  1. 3D input and feedback.
  2. Techniques for combining large-screen facilities and small (personal) screens.
  3. Having designer and user understand each other, so as to work towards common goals.
  4. Integrating off-line technologies such as digitising.

8.15 THE FUTURE

No matter where There is, when
you arrive it becomes Here.

-Muggles, Maxims

8.15.1 Introduction

Whenever you talk about what will be available in the future, there is always a feeling that it is too expensive and too far away to have any relevance. The following lists indicate those facilities that will be available in the period 1979 to 1981. Consequently, the future being discussed is, in effect, now. The real future will bring bio-cybernetics, EEG and other devices that will allow you to think graphics and have it appear!

8.15.2 Input

  1. 3-D magnetic sensing and sonic location
  2. Vision - moire type - 3D. There should soon be very few problems in digitising 3-D models
  3. Video digitising - it is easy
  4. Touch and pressure sensitive displays
  5. Six-axis pressure-sensitive joysticks
  6. Microphone - voice switch - voice recognition - relatively successful continuous speech recognition now
  7. The Labjacket - should be available by October - will have sensors all over it. Every button will know where it is.

8.15.3 Storage

  1. Memory is getting cheaper - the Japanese have a 1 Mbit chip.
  2. Optical video disc. The cost of a video disc is now only $695, plus $45 to make it random access. If you cannot do it yourself, you can get one for $3000 from a vendor.
  3. Flat panel displays which are readable as memory should be available by 1981/2.

8.15.4 Output

  1. Large-format displays (such as Hughes Light Valve Technology, which should be available in 6 months)
  2. Flat, portable, low-power (waterproof) displays
  3. 3-D displays
  4. Laser projections (already used on clouds outside Philadelphia - the image was 7 miles by 7 miles)
  5. Binaural sound is a winner
  6. Voice output - digitised or synthesised (for example, Speak and Spell)

8.15.5 Retarding Effects of Existing Environments

The purpose of this section is to avoid inadvertently creating barriers to progress in interaction. The first major retarding effect is the widespread use of storage tubes. The second is the Core. It is hindering progress because people stop working on other things; thus it hinders irrespective of its quality as a standard. Thirdly, limiting one's thinking of interaction to graphical and alphanumeric interaction will delay progress.

8.15.6 The $100K System

Because nobody would take us seriously enough, we decided to define a $100K system that you could buy today.

SYSTEM-X 1979
                                                  $
1 Mbyte, 32-bit CPU                          50,000
1 300 Mbyte disc                             12,000
1 24-bit frame buffer                        30,000
1 optical videodisc                           3,000
1 data tablet                                   600
1 RGB Advent-type projection                  4,000
8 terminals                                   4,000
Sound equipment                                   ?
1 Votrax-like voice equipment                 4,000
1 Dialogue Systems-like speech synthesiser   10,000
TOTAL                                       107,600
(-7.6 per cent Educational discount!)

8.15.7 General observations relevant to the Theme of Seillac

  1. The most important consideration is that of a way of thinking about the present in such a fashion as not to preclude innovations. Such innovation will probably come from developments in consumer products (not office automation, CAD/CAM, etc.).
  2. As computers get cheaper and faster, and memory more plentiful, systems manufacturers will find that their only leading edge is at the interface, which must be at once transparent and ubiquitous.
  3. Multiple media are not just the toys of the rich, but the instruments of communication, saying the same thing in many different ways in order to respond to individual cognitive styles and particular work situations. Toys like Speak and Spell and video media (Viewdata, Ceefax, intelligent videodiscs, etc.) have some very deep messages for us. Voice synthesis, colour displays and interactive movies will emerge extremely rapidly, as leisure systems.
  4. Parallelism, concurrency and redundancy characterise the human interface of the near future. While it would be foolish to expect everybody to have their own media room, we can expect the terminals of 1981 to talk, have spatial sound and be full-colour frame-buffer displays, probably with limited speech recognition, touch-sensitive surfaces and flat panel displays.
  5. A consequence of point 4 is that the traditional cause-and-effect, ping-pong model is an inappropriate, in fact counterproductive, model. Many inputs will be processed and cause outputs without ever reaching the deepest levels of a problem space. Similarly, concurrent outputs may be seeking parallel processing by the user.
  6. As interaction with computers becomes increasingly pleasurable (that is not a bad word) some interaction will transpire for its own sake.
  7. As interactive techniques increase in their expressive and receptive qualities, computer models will need to become much less literal. Database designs cannot include data types which have an intrinsic mode or medium of representation, except in facsimile format, where the machine is not expected to understand the data itself.
  8. Individualisation is paramount. The personalisation of a terminal should be understood as deeply as possible, with the understanding that the real roadblocks are at the AI level of the machine's model of the user's model of him or her, and that the convergence of that model with the user's own basic model can be considered the definition of acquaintance.
  9. The most valuable contribution that this workshop can make to the future of computers will be the seemingly contradictory statements that a multiplicity of modes and media for interaction is already here, and that the future of interactive techniques is something larger than what we call and perceive as computer graphics.

8.16 DISCUSSION

Shaw:
What are the 8 terminals for?
Negroponte:
People do still write programs, therefore terminals.
Sancha:
Programmers are still lesser beings who are not allowed to use all the sophisticated equipment!
Newman:
You say displays with infinite memory are just around the corner, but, at the same time, say storage tubes are bad. Is that not inconsistent?
Negroponte:
But solid-state displays can store images which you can read back, and that has applications.
Baecker:
We are not condemning storage tubes as used in the past, but now they are hindering the development of richer user interfaces.
Newman:
What will we learn from paying 100K + 7K tax? We are not an educational institution.
Negroponte:
You will not learn anything. There is a fiction which says SDMS is a collection of costly equipment. The point is that the cost of this equipment need not take 2% of the National Defence Budget of the USA as some people think.
Sancha:
You have spent 100K and got good hardware. How much else do you need to program it? The gist of what you are saying is that the hardware is getting cheaper. Software is not moving as fast. Did you discuss this?
Engelman:
Yes, you could produce systems in hardware for 100K, but it costs 10 times this for the software. But because the hardware is so cheap, people will try to use it, and if you have the wrong interactive systems, it will make things worse.
Negroponte:
We looked at hardware not software.
Baecker:
Within the time frame of 1979/81, for the price of a large-screen storage tube with keyboard, thumbwheel and joystick input, we can configure a display system with which we can draw thick lines, shade regions, draw texture, add colour, respond in sound, respond to voice and respond, to a limited extent, to touch.
Even if body sensors will not be around until 1990 (and we may question their relevance), colour, texture, line-thickness and audio input/output are here now, are practical now and have been woefully unexplored, even by Nick, in the context of interactive dialogues. He has produced dazzling demonstrations but has not produced much hard scientific or empirical work indicating what those potentialities really buy us. It seems to me those cost benefits are there, but it is an open question. These new media should be part of our research mission in the future and should be kept as open possibilities.