3. SYNTACTIC STRUCTURES

3.1 INTERACTION AND SYNTAX - P. J. W. TEN HAGEN

3.1.1 The Perspective

A potential user of an interactive system has to learn a new language before he can actually use that system. This language contains all the sentences that he may use and all the sentences the system may utter to him. Once he knows the language, he has sufficient knowledge of the system to use all its facilities. Developing an interactive system, therefore, can be looked upon as designing and implementing an artificial language. Using an interactive system is communicating in that language. The language model of an interactive system puts emphasis on the user interface. It also allows us to investigate the function of syntax, where the term syntax will be used for all aspects of the language that have to do with its form.

The major property of the language is its semantic content. Syntax, more precisely the syntactic structure, should be such that the semantics are well-expressed. This overall function of syntax implies that syntax can be used as a tool for a designer. We will try to characterize the current state of the art with respect to syntactic methods in three design areas:

  1. The overall structure of an interactive system.
  2. The device independence of an interactive system.
  3. The context conditions stemming from a dialogue, expressing the relationship between question and answer.

3.1.2 Top Down Design

A designer of a system always has choices between two extremes:

  1. He can implement the system as a set of tasks from a functional specification and add the command syntax as a last step, or
  2. He can describe the user interface first and next implement this language.

The second extreme seems more attractive, since it takes the user into account throughout the design phase. However, it is likely to be impractical, as details of a conversation will not be known before actual implementation.

It is believed that syntax can be used to find an acceptable compromise. It is possible to define the function of a system in terms of user concepts. This description, leaving out all details, can be, and should be, equally well understood by both user and designer.

It is possible to document this language skeleton in the form of the top of a syntax tree. This tree should be used as a basis for further refinement of the design as well as for user specification of the system. We then have the following picture:

                        user model
                       /          \
        specification for        definition and implementation
        user/operator            of the system

Before further refinement takes place, semantic assertions should be made which must not be violated in the refinement process. One very important assertion is that a user should have a good overall view of the system and the way it is organised. When operating the system, he needs navigational aids to tell him where he is and where he can go. It is a safe strategy to add the required syntactic constructs for these navigational aids explicitly at a high level of the design phase. If they turn out to be superfluous, they can easily be removed. In general, it is impossible to add these facilities after the design phase.
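
As a concrete illustration, here is a minimal sketch in Python (all names are hypothetical) of a language skeleton recorded as the top of a syntax tree, with the navigational constructs added explicitly from the start:

    # A hypothetical language skeleton: user concepts only, no details.
    # The navigational aids are present from the beginning; if they turn out
    # to be superfluous during refinement they are easy to remove again.
    SKELETON = {
        "session":        ["command ..."],
        "command":        ["define object", "modify object", "display object", "navigate"],
        "navigate":       ["where am I", "go up one level", "show possible steps"],
        # placeholders, to be refined at the next level of the design
        "define object":  ["<to be refined>"],
        "modify object":  ["<to be refined>"],
        "display object": ["<to be refined>"],
    }

    def possible_steps(concept: str) -> list:
        """Navigational aid: tell the user where he is and where he can go."""
        return SKELETON.get(concept, [])

    print(possible_steps("command"))
    # ['define object', 'modify object', 'display object', 'navigate']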

A second assertion, which needs to be safeguarded during the refinement phase, concerns the basic concepts. They must remain recognizable. Although a parser may be able to recognise a basic construct, a user may not. At every level, the distinction between concepts should be defined before refinement. This again may add more syntax.

In most systems with a top-down, menu-driven organisation, examples can be found which violate these rules. Experience with programming languages shows that improvements in these areas make systems more effective and efficient. Everybody who has served as a program advisor (say, in a computing centre) knows that it is generally impossible to deduce a programmer's mistake from his own interpretation of what has happened. A system which allows for easy, unambiguous interpretation gives the advisor an easy job, and strongly reduces the number of programmers asking for help.

3.1.3 Device Independence

Device independence has a strong connection with program portability. The characteristics of portability are well known and have received much attention in the recent past. I will therefore emphasize an equally important aspect of device independence which has not received very much attention (and may be more difficult to solve).

The device independence I have in mind concerns the dialogue. The user is not aware of the physical devices he is using. Instead, he is fully concentrating on the communication. Using the same top-down model of the previous section, this requirement of device independence can be formulated as finding the lowest level of refinement. This level provides the user primitives. These primitives are to be understood not only as data types but also as operations.

User primitives should be of approximately the same complexity as the operating instructions used to provide them. If one wants to implement a primitive on a given device, one not only has to look at that primitive, but at the other primitives as well. If all primitives are not equally easy to use, the user will be forced into unnatural dialogues because he wants to avoid certain primitives.

Although syntax by itself is an insufficient tool, it is essential, because it must provide a complete layer of user primitives before a reasonable compromise between them can be found. The compromise tries to find, for a given set of primitives, a balanced implementation on the available resources. Only if that is impossible will the primitives themselves be changed.
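
To illustrate such a layer, a minimal sketch in Python (the device names and bindings are hypothetical): the dialogue uses only the primitive locate; which physical device supplies the point is decided once, below that layer, for the whole set of primitives.

    # A hypothetical layer of user primitives. The dialogue above this layer
    # sees only "locate"; whether a tablet or the keyboard provides the point
    # is decided here, once, for the installation at hand.
    from typing import Callable, Dict, Tuple

    Point = Tuple[float, float]

    def locate_with_keyboard() -> Point:
        return (float(input("x: ")), float(input("y: ")))

    def locate_with_tablet() -> Point:
        # placeholder: would read the tablet on an installation that has one
        return (0.0, 0.0)

    PRIMITIVES: Dict[str, Callable[[], Point]] = {
        "locate": locate_with_keyboard,   # rebound to locate_with_tablet where available
    }

    def locate() -> Point:
        """Device-independent primitive used by the dialogue."""
        return PRIMITIVES["locate"]()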

3.1.4 Context Conditions

So far the use of syntax as a tool has not made syntax more complicated. The concept of a top down design even postulates that (as in ordinary program design) the top of the syntax structure is simple.

Unlike top down design and device independence, context conditions are still an open problem. The semantics are unclear, and hence it is not known what syntactic structure should be chosen. By the dialogue context problem we mean the following general problem. A dialogue proceeds in steps, each step consisting of an answered question:

      S:  Q,A.

Both question and answer can be made up of steps. In each step a semantic relation may exist which we may wish to express by a corresponding syntactic structure. Examples of such relations are: synchronisation, type or value correspondence, and feedback relations.
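
One way to make the structure of a step concrete is a small data type in which each step carries its question, its answer, any sub-steps, and the semantic relation between them (a sketch in Python, with hypothetical field names):

    # A hypothetical representation of a dialogue step S: Q,A. Both question
    # and answer may themselves be made up of steps, and a semantic relation
    # (synchronisation, type or value correspondence, feedback) may be attached.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Part:
        text: str
        substeps: List["Step"] = field(default_factory=list)

    @dataclass
    class Step:
        question: Part
        answer: Part
        relation: Optional[str] = None   # e.g. "value correspondence", "feedback"

    # the answer to "draw a line" is itself a small dialogue of two located points
    line = Step(
        question=Part("draw a line"),
        answer=Part("two points", substeps=[
            Step(Part("first point?"),  Part("(x1, y1)")),
            Step(Part("second point?"), Part("(x2, y2)")),
        ]),
        relation="value correspondence",
    )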

Current practice in programming languages is not to express context sensitivity through syntax. Context conditions are still in the realm of semantics. The reason for this is that, for non-trivial languages, the context-free syntax is already complicated enough. There is a clear distinction here between programming languages and dialogue languages. The latter have a much more basic context sensitivity. There are a few examples of programming languages (like ALGOL 68) which require context sensitive parsing. Almost all compiler writers for ALGOL 68 have tried to rearrange the syntax in such a way that it is context-free except for a finite number of well defined places.

Context conditions occurring frequently in the language (for example, the requirement that a variable be declared within the scope of its use) are never enforced by syntax. On the other hand, synchronisation requirements are explicitly stated in a number of proposed systems programming languages.

The message here is that syntax should remain manageable, especially when it has to serve as a design tool. In particular the requirements imposed on syntax in the previous two sections recommend that syntax remains simple. In spite of the fact that problems concerned with context conditions are very difficult, I think that no solution can be accepted which sacrifices either clarity or device independence.

There is no objection, of course, to putting context conditions into syntax for the purpose of arriving at a better understanding. The context can be provided by the syntax in four forms (three of which are represented in the position papers):

  1. By state
  2. By synchronisation
  3. By hierarchy.

One not represented is introducing context by a two-level grammar. However, the other three suffice to illustrate the problems.

In the case of states, each question and answer can imply a state transition. These new states can then be affixed to certain steps, thereby restricting the number of possible steps.

In the case of synchronisation, certain steps may be carried out in parallel, but certain questions have to precede certain answers, and certain steps have to precede other steps.

In the case of hierarchy, a syntax rule may be parametrised or change the environment for subsequent rules upon return.

The first two forms are the more powerful, but lead very quickly to large syntaxes and inefficient parsers. The third is semantically much more modest, but it does produce a much smaller syntax.
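
As a minimal sketch of the first form, in Python with hypothetical states and steps: every step implies a state transition, and only the steps listed for the current state are possible, so that while a line is being drawn, drawing a circle is simply not an available step.

    # Context by state: each (state, step) pair names the resulting state;
    # any pair not listed is not a possible step in that state.
    TRANSITIONS = {
        ("command",             "draw line"):    "expect first point",
        ("command",             "draw circle"):  "expect centre",
        ("expect first point",  "locate point"): "expect second point",
        ("expect second point", "locate point"): "command",
    }

    def next_state(state: str, step: str) -> str:
        if (state, step) not in TRANSITIONS:
            raise ValueError(f"step '{step}' is not possible in state '{state}'")
        return TRANSITIONS[(state, step)]

    state = next_state("command", "draw line")   # -> "expect first point"
    # next_state(state, "draw circle")           # would raise ValueError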

An important point in all three methods is that they provide a mechanism to relate input and output. The state of the art is that, for lack of knowledge and methodology, input and output are treated as independent concepts.

3.1.5 Conclusion

No questions about the syntax of a given language can be satisfactorily answered without taking semantics into account. We can now return to thinking about the basic semantics of interaction, knowing that the syntax is there to support them.

3.2 DISCUSSION

Shaw:
I would like to make five comments:
  1. In interactive systems, as opposed to programming languages, the relative importance of syntax is enhanced. This is because the user is faced not simply with an abstract syntax, but with one of many possible representations of it as a concrete syntax in terms of concrete devices.
  2. Some applications are (almost) pure syntax, for example, draughting and document preparation systems. In these, the semantics are the concern of the user; the system deals only with the syntax of the (to it) meaningless drawing or document.
  3. In interactive systems, there are normally two languages in use. The user talks to the system in an almost sequential language; the system replies in what appears to be a fairly parallel form.
  4. If syntax is a useful specification for a dialogue, it may also be natural to have syntax controlled dialogues. The syntax can be used to generate prompts, menus and even applications control structures.
  5. The position papers present several fundamentally new ways of specifying the syntax of graphical interactions, but none of them seem to have been used except for toy applications. They are not yet well developed, but they seem to indicate that the descriptive tools may not be as satisfactory as Paul supposed.
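
(As an illustration of the fourth point, a minimal sketch in Python with a hypothetical grammar: the alternatives that the syntax allows at the current point in the dialogue are used directly as the menu of prompts offered to the user.)

    # Hypothetical grammar: the menu is generated from the syntax itself.
    GRAMMAR = {
        "command":   ["draw line", "draw circle", "delete", "quit"],
        "draw line": ["first point", "second point"],
    }

    def menu(nonterminal: str) -> str:
        return "Choose one of: " + ", ".join(GRAMMAR.get(nonterminal, []))

    print(menu("command"))   # Choose one of: draw line, draw circle, delete, quit
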
Kay:
There is a system where there has been more than 7000 hours of interactive use and that is the PLATO system [1]. They have found that most of the interaction grammar is involved with error handling, spelling corrections, back-tracking, etc. They have found that a non-hierarchical structure is essential. Traditional finite state grammars just cannot represent their syntax. It is important to consider PLATO as no other system has so much experience with real users.
van den Bos:
Nievergelt has used PLATO so I was surprised to see that his system was tree-structured.
Weydert:
Trees were used because the user found it difficult to find his way around the system in PLATO.
Kay:
We spent time building browsers because we could never remember where we were so you have to have some structure.
Sancha:
There is a relationship between devices and syntax. Our system maps all input entities onto a symbol stream - both text and graphical. The syntax analysers know nothing about the hardware. We find that 90% of any dialogue can be implemented so that we can move freely round a large number of devices. However, if you want to use the interactive capabilities of the device to the full, it may not be possible to map these onto the syntax. For example, some things you may want to do sequentially while others can be done in parallel. Ordering may be very important for some small part of the dialogue.
Baecker:
We need to study some small problems in detail and try them with different languages, different control structures and different hardware contexts. We need better ways to communicate what the user sees and does. There are no good formalisms at the moment.
Guedj:
We have had two sessions, first on control structures and now on syntactic structures. The same expressions have been used in both, but with different meanings. For example, the word dialogue has been used. Paul had a series of questions and answers. Is this a dialogue? At some point he mentioned interactive dialogue. What is a non-interactive dialogue?
ten Hagen:
In an interactive session, the main question and answer is what I considered the interactive dialogue, while the sub-questions and answers were the dialogue.
Guedj:
Context conditions were mentioned as giving the relationship between question and answer - are they in the realm of syntax, semantics or perhaps even pragmatics?
ten Hagen:
In the syntax, when you express context, you are only interested in a small number of possible answers. Consider a dialogue to draw a line. The syntax context condition will stop you drawing a circle in the middle of drawing a line. Thus, for example, it might separate out the active button set. This could be expressed explicitly in the syntax. However, it might make the syntax too complicated.
Guedj:
It seems that we use context conditions to describe difficult things.
ten Hagen:
Not necessarily so. Context conditions may be easy to describe.
Encarnacao:
You may have a system for digitising cable paths; different operators will digitise them in different ways. You need context conditions to describe these different approaches.
Kay:
There is a distinction between mode-full and mode-less interaction. A mode-full interaction is, for example, a text editor with an insert function which allows you to input text until a control character is hit to return you to command level. Many interactive systems are mode-full. A good example of a mode-less system is the GRAIL system at RAND. You never had to terminate one mode to get to another meaningful one. The system would clean up after you. SMALLTALK is also mode-less.
The distinction between the two is very important. The syntactic approach tends to give you a mode-full system. Experts like systems of this type but novices do not. They forget where they are and get trapped down somewhere in the system and cannot get out. Commands, that ought to work, do not. This is a difference between expert and novice. For the novice, a system that returns to level zero after every command is preferable.
Bono:
Reducing ambiguity is a key issue. Rules must tell you how to interpret the next response. In a mode-less system, there must be some way of ensuring that each interpretation is unambiguous. This could be done by knowing the user profile or by entering into a dialogue to reduce the ambiguity. Do we have to choose between mode-full and mode-less? We need to define systems that can appear as both mode-full and mode-less systems. Experts like mode-full while novices like mode-less. Can we categorise the user community? How do you indicate that a particular environment is mode-full or mode-less? Should you provide cues or signals when entering a particular phase?
van Dam:
We have found that structured dialogues are often easier for the novice. He has a fixed way of doing things. For example, type ahead is alright for experts but it is hell for novices.
ten Hagen:
Syntax describes choice - what you can say. It will allow many programs that don't make sense. You need to decide where the borderline lies: whether incorrect programs are stopped by syntax, by semantics, or not at all.
Sancha:
I do not like the terms mode-full and mode-less. I would prefer restrictive and permissive. We let our users access the system in a permissive manner. If an illegal command is encountered in the current mode, then a transition occurs to a state where the command is valid. If you allow these transitions, then a mode-full system can be highly permissive.
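
(A minimal sketch of this behaviour in Python, with hypothetical modes and command sets: a command that is illegal in the current mode triggers a transition to a mode in which it is legal, instead of producing an error.)

    # Hypothetical modes and command sets. An "illegal" command does not stop
    # the user; the system moves to a mode in which the command is valid.
    MODES = {
        "edit text":    {"insert", "delete", "replace"},
        "edit picture": {"draw line", "draw circle", "erase"},
    }

    def execute(command: str, mode: str) -> str:
        if command not in MODES[mode]:
            for other, commands in MODES.items():
                if command in commands:
                    mode = other       # implicit transition instead of an error
                    break
            else:
                raise ValueError(f"'{command}' is valid in no mode")
        print(f"[{mode}] {command}")
        return mode

    mode = execute("insert", "edit text")   # stays in "edit text"
    mode = execute("draw line", mode)       # silently switches to "edit picture"
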
Shaw:
Interleaving of commands allows both these - see my position paper.

References

(1) D. Alpert and D. L. Bitzer, 'Advances in Computer-Based Education', Science, 167, 1582-1590 (1970).
