
10.1 The Semantics of Graphical Input

E Anson

45 Royal Crest Drive

North Andover, Massachusetts 01845

U.S.A.

Any graphical input device may be represented by a data structure, modified from time to time by actions performed in response to certain events, together with the ability to cause certain events as part of its repertoire of actions. Portions of a device's state may be made visible to other devices in a controlled way, and the remainder hidden. Conversely, a device may make use of the visible portions of another device's state. Typically, the pattern of device interaction forms a hierarchy, but no device is part of any other. This provides for the interchangeability of a single device with a group of devices, and allows a single device to support the function of several others. Device independence is thus enhanced without the usual sacrifice of human factors considerations.

A group of devices defined in this manner can simulate any group of devices defined in the usual manner. Conversely, useful groups of devices may be defined which cannot conveniently be simulated by the usual input semantics. The proposed semantics is thus more complete, and provides the additional benefit of a uniform language for describing both physical and virtual devices.

INTRODUCTION

Hardware designers continue to be generous in their production of new devices for graphical input. The number and variety of devices now available provides a system designer with considerable freedom in adapting a human interface to its intended users. Unfortunately, this diversity of devices also tends to bewilder software designers. The difficulty arises from the lack of sufficient formalisms to bring order to the description and use of the devices (17).

One major aim of current research (16) is to develop the means to standardize graphics software so that applications may be made relatively independent of their hardware environment. At present, the ability to transport an application program from one device to another is purchased at a great price. That is, the ability to fully utilize specific device characteristics, to the benefit of the user, is lost. This paper introduces the semantics of action, an approach to describing input devices which allows the full utilization of all useful device characteristics and provides a high degree of hardware device independence.

Most of the subject matter is presented in three parts. The first discusses the semantics of graphical input devices. The second part shows how to create hierarchies of devices which provide a large measure of hardware independence. The third part applies these concepts to some typical problems, to demonstrate their completeness.

DEVICE CHARACTERISTICS

Recent discussion of device independent input has centered on the notion of virtual devices or tools as primitive elements. According to this approach, the application programmer deals with the small, fixed set of virtual devices (8) supported by the system software. Any physical devices with features not covered by this set are thus restricted in use; any features not provided by the hardware are simulated in a manner determined by the designer of the system software.

Problems with Virtual Devices as Primitives

One of the problems inherent in this approach may be illustrated by the following scenario: An application programmer designs a program which allows a user to position a picture on a 2D surface using the joy stick. Although the application only requires X and Y values from a joy stick, the terminal to be used has a 3D joy stick, which also produces a Z value. The presence of excess information is not a great concern, and there is an ergonomic preference for the joy stick. Therefore, the programmer selects the particular 3D locator (16,18) which corresponds to the joy stick, and simply ignores the superfluous Z value.

The program works fine, and everybody is happy until the need arises to use a different terminal. The program uses a device independent subroutine package (e.g., GPGS (5)), so it immediately runs with the new terminal. However, the new terminal has a 2D joy stick instead of 3D, and the 3D locator is simulated by requiring the user to type three values between zero and one. What was once a very usable program is now cumbersome at best and probably useless, even though it runs.

Possible alternatives to this scenario offer no improvement. The programmer could have originally selected the 2D locator called for by the application, but only at the expense of using a device less well suited to the manual task. Alternatively, one could modify the program by uniformly replacing 3D by 2D locator requests, but this process is tedious and error prone and certainly doesn't indicate much device independence. This sort of problem seems inevitable when using systems such as GPGS (5) or the Core system (16), since such systems provide no control over how a virtual device is simulated.

It has been suggested (8) that some means might be provided to assign alternative physical devices to any given primitive (virtual) device. However, no adequate formalism has been published, and none is proposed for the Core Graphics System (16). Indeed, none is likely to appear soon which will permit the full and flexible use of useful device characteristics within the existing framework.

One possible exception is the proposal by van den Bos (17) which would permit the 2D device needed in the above scenario to be simulated by the 3D device. Upon changing physical devices, the simulation could be replaced by the actual tool. But the problems of a fixed set of primitive devices remain. For instance, the dispute over which devices should be regarded as primitive continues. If the keyboard is made primitive as suggested in (16), individual key strokes are not usable. If the key is primitive as suggested in (17), those terminals which only transmit strings present a problem. The semantics of action defines primitives at a lower level, and permits the use of actual device characteristics in a device independent manner.

Useful Properties of Interaction Semantics

An acceptable language for describing and controlling graphical input will have the following semantic properties:

  1. Application programmers may exploit any useful characteristics of any physical input device. Some of those characteristics may be the motivation for choosing the device in the first place.
  2. Reassignment of devices (or terminals) involves, at most, a small amount of reprogramming. This should be true to the extent that the devices are similar. Dissimilar devices will inevitably provide different user interfaces.
  3. Any useful interaction may be described precisely, clearly and succinctly.
  4. Sufficient freedom exists to easily incorporate new devices, including those which provide feedback under application program control (9).
  5. The description of a device's behavior is independent of the implementation. In particular, no distinction of semantic significance is made between hardware devices and their software equivalents.
  6. The description of a device is independent of its application.

Three primitive elements, together with suitable composition rules, are required to satisfy these requirements. The remainder of this section describes those elements and shows how they are used to compose interactive devices.

Elements of Graphical Input Devices

Any graphical input device may be described using combinations of the following elements:

  1. State
  2. Event
  3. Action
State.
A device's state may be represented by the contents of data variables. Some variables in a device's state may be examined freely and at any time by other devices, and thus are called transparent (15). The rest remain hidden. Similarly, some variables in the state may be external. That is, they may actually be transparent components of another device's state. External state may also be transparent or hidden. The value of a transparent external variable may be defined (for convenience) as a function of the external state and possibly part of the internal state.
Examples of the objective meaning of state include: the position of a knob, joy stick or toggle, the contents of a keyboard string being composed, and the state (on/off) of the light on a function key. Note that this last example illustrates a form of feedback.
Event.
Events are discrete elements of communication between devices. Each event is caused by an action and may trigger an action in one or more other devices. It may also provide parameters to those actions. Examples of events include: pressing a function key, touching a light pen to a displayed element, and the completion of a time increment.
Action.
Actions occur in response to events, and may proceed concurrently with each other and with the invoking action. An action may receive parameters from its invoking event. These parameters, together with the device's state, determine the result of the action. Typically, an action will alter the internal state, signal an event and then terminate within a very short period of time.
Devices with internal state require a special initial action which is performed once upon creation of the device to initialize the internal state. In the examples given below, its parameters are expressed as parameters of the device. Other examples of actions include: turning a function key light on or off, and adding a character to a keyboard string being composed.
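As an informal, present-day sketch (not part of the paper's notation), the three elements map naturally onto a small class: hidden state as private attributes, transparent state as a public attribute, actions as methods invoked by events, and caused events as calls through a supplied callback. All names here (Device, signal, and so on) are illustrative only.

```python
class Device:
    """Sketch of the state/event/action model, using a lighted key as the example.

    Hidden state: _name, _lighted.  Transparent state: down.
    Actions run in response to events and may themselves cause events.
    """

    def __init__(self, ident, signal):
        # The "initial" action: runs once, on creation, to set up internal state.
        self._name = ident        # hidden: identifier passed with interrupts
        self._lighted = False     # hidden: light is initially off
        self.down = False         # transparent: key is initially up
        self._signal = signal     # callback used to cause events

    def on_finger_press(self):    # action triggered by the finger-press event
        self.down = True
        self._signal('press', self._name)    # cause a press event

    def on_finger_release(self):  # action triggered by the finger-release event
        self.down = False
        self._signal('release', self._name)  # cause a release event

    def on_set_light(self, value):
        self._lighted = value     # feedback under program control
```

A driving program would link real interrupts to these methods and route the signalled events onward; the class itself assumes nothing about who listens.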

Describing Interactive Devices

The relevant features of any object (e.g., an interactive device) may be represented by a data structure, if those features are sufficiently understood. Hoare develops the notion that the axioms governing behavior of an object may be represented by analogous axioms governing behavior of a data structure in a program (10). Liskov has demonstrated that the behavior of a structure may be axiomatized entirely in terms of the values obtainable after sequences of operations (12). However, effective implementation in a programming language (13) apparently requires representation by data as well.

By analogy, a device could be described entirely in terms of the events it responds to (i.e., operations), its transparent state upon completion of any action (i.e., values obtainable) and the events caused by each action. Indeed, these properties constitute all that is known about a device outside its own definition. It is this independence of internal structure which provides the basis for device independence. The inclusion of algorithmic actions and hidden state is to make the description of a device's behavior sufficiently procedural to facilitate effective implementation on a computer.

To illustrate the use of these elements, the next few paragraphs describe some simple interactive devices. For compactness of presentation, a notation is used which resembles a programming language fragment. However, the notation is not complete and is not formally defined here. It should thus not be regarded as suitable for incorporation into a programming language.

Function Keys.

A function key is a device which responds to the press of a finger (an event), or its release (another event), by changing state from up to down or back up. As such, it consists of two actions, each of which alters a one-bit state. In addition, one or more of the following features are generally available:

  1. Position of key may be queried by software.
  2. Interrupts when pressed.
  3. Interrupts when released.

Any non-empty combination of these options can be found in some function key. In addition, two further features are often available:

  4. A light under software control.
  5. An identifying value (usually an integer) passed with each interrupt.

A completely general function key may be described thus:

device FK (id: integer) =
    state name: integer, lighted: boolean;        "hidden state"
    transparent down: boolean;                    "position of key is visible"
    initial begin
        name := id;                               "save identifier for interrupts"
        lighted := false;                         "light is initially off"
        down := false;                            "key is initially up"
    end;
    on finger-press begin
        down := true;
        signal press(name);                       "interrupt when pressed"
    end;
    on finger-release begin
        down := false;
        signal release(name);                     "interrupt when released"
    end;
    on set-light (value: boolean) begin
        lighted := value;                         "turn light on or off"
    end
end device

A much simpler function key is:

device SFK =
    on finger-press signal press;                 "interrupt when pressed"
end device

Valuators.

A valuator is characterized by a transparent state which is a real value. Typically, this value varies between zero and one, and correlates with the angular position (external state) of some device such as a knob. In this form, it requires no driving events and produces none of its own:

device valuator =
    external knob-angle: angle;
    transparent value = (knob-angle / 360°): real;
end device

Keyboard Strings.

A keyboard string device causes a string event in response to a transmit event, which is typically caused by the return key on a keyboard. The string event passes a character string as its parameter. This device's state (hidden in most systems) consists of the string to be transmitted on request. A typical keyboard string device is:

device keyboard-string =
    state Value: string;
    initial Value := '';
    on key-press (Key-name: char) append Key-name to Value;    "typing"
    on back-space begin                                        "error correction"
        if Value = ''
        then skip
        else delete last character from Value
        fi;
    end;
    on transmit begin
        signal string (Value);                                 "pass completed string"
        Value := '';                                           "start the next string"
    end
end device

The reader has probably noticed that, as in the case of FK lights, no mention is made here of displaying relevant portions of the state as feedback to the user. This oversight could be corrected by introducing displayed as an attribute of state variables. However, this would raise questions which are outside the scope of this paper. The conclusions below provide some indication of the manner in which displayed state might be handled.

Clocks.

One sort of system clock causes an interrupt at regular time intervals.

device system-clock (interval: integer) =
    state time, period: integer;
    initial begin
        period := interval;                       "remember interval between ticks"
        time := period;                           "start count of first interval"
    end;
    on time-quantum begin
        time := time - 1;
        if time = 0
        then
            signal tick;                          "periodic interrupt"
            time := period
        fi;
    end
end device

The next section illustrates how devices described in this manner may be arranged into hierarchies which provide a good deal of device independence.

DEVICE HIERARCHIES

Interactive devices such as those described above are the basic building blocks of interactive systems. When properly designed, they satisfy the modularity criteria proposed by Parnas (14). Typically, they will be arranged into a hierarchical relationship to create a system. Such a system can be easy to use and highly adaptable to changes in hardware. This section discusses some of the essential concepts related to the design of hierarchical interactive systems, and provides examples.

Concepts of Device Hierarchies

To enhance modularity, to accurately describe hardware devices, and to provide maximum system adaptability, the semantics of action defines each device independently of others. That is, no device is part of another, and no device assumes the existence of any other specific device. Everything which affects a device from outside is mediated by events and transparent state. The linking between events and actions, and between external and transparent state variables, is specified separately and may be changed without modifying devices. It is this linkage which arranges the devices into a structure. No language is suggested here for specifying structures, but some aspects of the problem are considered. A full treatment of the topic would exceed the scope of this paper.

The name of an event or state variable in one device needn't correspond directly to names in any other device. Separately specified relations link device elements. The relation between transparent state variables and external variables is potentially one-to-many. The relation between events caused, and the actions which result, is potentially many-to-many. Of course, any transparent variable may remain unreferenced, and events need not always cause actions. In fact, it is not essential that actions be caused. Only those external variables which are used are required to be bound. This flexibility in binding permits the unneeded features of a very general device to remain unused. If such a device is implemented by an optimizing compiler, the superfluous features can be eliminated automatically to improve efficiency.
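This separately specified, many-to-many linkage can be illustrated with a small registry sketch (Python, with hypothetical names; the paper deliberately proposes no concrete language for it). Devices signal events by name; the registry decides which actions, if any, result.

```python
class Linkage:
    """Sketch of event-to-action bindings specified outside the devices themselves.

    The relation between events and actions is many-to-many, and an event
    with no bound action is simply dropped, as the text allows.
    """

    def __init__(self):
        self._bindings = {}   # (source-device, event-name) -> list of actions

    def bind(self, source, event, action):
        self._bindings.setdefault((source, event), []).append(action)

    def signal(self, source, event, *params):
        # Events need not cause actions; unbound events are silently ignored.
        for action in self._bindings.get((source, event), []):
            action(*params)
```

Rewiring a system to different hardware then means changing only the `bind` calls, never the devices.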

Organization of devices may be based on abstraction. The most abstract devices are thus at the top of the hierarchy; the least abstract are at the bottom. The choice of devices for each level affects the adaptability of the system, so a few words on this topic are appropriate. The most important consideration is transparency (15). Simply put, a device is transparent to the extent that it passes all information available to it which may be useful to other devices. For instance, the function key which allows its state to be examined is more transparent than one which does not. Similarly, the function key which provides an interrupt every time its state changes is more transparent than one which does not.

Generally, the best description of a physical device at the bottom of the hierarchy is determined by the useful features of the device. Basic support software should provide maximum transparency. At the other end of the hierarchy, the description of devices should accurately reflect the essential features of the application (1). If details of the human interface are omitted at the top level, lower levels may be changed to alter the dialog structure. Such a change may be required by a change of hardware, or by the discovery that the original interface design is inadequate. In either case it is likely that only a few modules will require modification or replacement.

Between the top and bottom levels, the change in abstraction from one level to the next should be fairly small. This tends to improve adaptability and increase transparency. Devices close to the lower end of the hierarchy should be general purpose in nature and kept available in a library for use in several applications.

Examples of Device Hierarchies

Making nested rotations convenient for the user can involve the use of an unbounded rotational device. Such a device is characterized by a transparent state which is an unbounded real value correlated to the rotation of a knob or similar object. Since the typical knob is bounded at 360° or less rotation, Britton (4) suggests the use of a clutch which allows the user to logically disengage the knob while returning it to a more convenient angle. Ideally, the clutch should be implemented as a momentary switch such as the general function key described above.

Other modular approaches to high-order devices, such as in (17), would have difficulty expressing such a device hierarchy, but the semantics of action defines it simply as:

device unbounded-valuator =
    state clutch-engaged: boolean, bias: real;
    external bounded-value: real;
    transparent value = (if clutch-engaged
                         then bounded-value + bias
                         else bias): real;
    initial begin
        clutch-engaged := true;
        bias := 0.0 - bounded-value;              "value := 0"
    end;
    on clutch-disengage begin
        clutch-engaged := false;
        bias := bounded-value + bias;             "value will remain unchanged"
    end;
    on clutch-engage begin
        clutch-engaged := true;
        bias := bias - bounded-value;             "value will change"
    end
end device

The bounded-value in this device is linked to the transparent value of the valuator described earlier. The clutch-disengage and clutch-engage events are linked to the FK events press and release respectively.
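The bias arithmetic of unbounded-valuator can be checked with a small self-contained sketch (Python; the knob is modeled as a mutable cell standing in for the linked external state, and all names are illustrative). The transparent value starts at zero, holds still while the clutch is disengaged, and resumes tracking the knob on re-engagement.

```python
class UnboundedValuator:
    """Sketch of the clutch device; read_bounded stands for the external link."""

    def __init__(self, read_bounded):
        self._read = read_bounded        # external: the valuator's transparent value
        self._engaged = True
        self._bias = 0.0 - self._read()  # initial action: value := 0

    @property
    def value(self):                     # transparent state
        return self._read() + self._bias if self._engaged else self._bias

    def clutch_disengage(self):          # linked to the function key's press event
        self._engaged = False
        self._bias = self._read() + self._bias   # value will remain unchanged

    def clutch_engage(self):             # linked to the function key's release event
        self._engaged = True
        self._bias = self._bias - self._read()   # value will track the knob again
```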

Now let us suppose that a function key must be used which does not provide an interrupt when released. The key's transparent state may be used, with a system clock, to simulate the needed general function key thus:

device GFK =
    external transparent down: boolean;           "transparent state is passed on"
    state was-down: boolean;
    initial was-down := false;
    on FK-press begin
        signal press;                             "press event is passed on"
        was-down := true;                         "remember need to signal release"
    end;
    on tick begin
        if was-down and not down                  "function key has been released"
        then
            signal release;                       "synthesize release event"
            was-down := false
        fi;
    end
end device

On the other hand, a function key with only a press event and no transparent state requires a small change in the user interface if it is to help implement a clutch. For example, it may be converted to a lock switch thus:

device lock-switch =
    transparent down: boolean;
    initial down := false;
    on SFK-press begin
        if down
        then
            signal release;
            down := false;
        else
            signal press;
            down := true
        fi;
    end
end device

Note that none of these hardware changes requires any alteration of unbounded-valuator.

SOME TYPICAL PROBLEMS

To further illustrate the completeness and expressive power of the semantics of action, the next few paragraphs illustrate the manner in which it solves some typical problems. In particular, special attention is paid to issues involving human factors and device independence.

Implementing Device Independent Software

Implementation of a high level general purpose graphic system (such as GPGS (5)) typically involves a great deal of programming for each terminal (2). Each terminal must be provided with its own device dependent software to support the device independent system. Since those tools not present on a terminal must be simulated, a device driver is (ironically) more complex for the simpler terminals. Since no devices are considered primitives, a system based on the semantics of action reverses this situation. The only software that is necessarily device dependent is that which incorporates each physical device's useful features into the system. Device independent software provides any higher level features.

Applicability of these concepts to such a system is readily apparent. Indeed, a related methodology was used to implement the simulated tools for the Tektronix 4010 series (3). The concepts presented here are a refinement and extension of those developed to facilitate that implementation.

User Freedom

Another typical problem involves the freedom provided to the user. Consider an interactive application in which the following situation arises: A command string is to be entered by the user, including several parameters. However, the user doesn't immediately know the proper values for those parameters. Happily, they may be discovered (one at a time) by means of some other interaction with the system.

Several options are thus available to the designer of the application program. The simplest is to require that the user remember the parameters, or write them down, as each is discovered. A somewhat more friendly approach would be to provide a sub-dialog structure which allows the parameters to be entered one at a time. However, the latter approach suffers the dual disadvantages of requiring the designer to anticipate the need and of imposing greater complexity on both the programmer and the user.

A better solution is to allow the user to perform the other interaction (e.g., using function keys) in the midst of composing the command string. Implementation of this approach is usually simple on terminals which provide string buffers. However, anomalies may occur if an attempt is made to use the application on a different terminal. For instance, if the keyboard string device is simulated as suggested in (17) the needed freedom disappears. That is, once the system detects the beginning of a string being typed, it will ignore all attempts to do anything except complete (or cancel) the string. Other approaches, such as that suggested in (8), can provide the required freedom for the user but lack the ability to provide it in a modular, device independent manner.

On the other hand, the semantics of action provides both the freedom and the device independence. Due to the manner in which the keyboard string device is specified, it behaves the same regardless of whether it is a software simulation or a hardware device. In particular, no special provisions are required to provide overlap between its operation and that of other devices.

Equipment Shortages

Another typical problem involves the availability of equipment. Consider an application which provides its user with 24 distinct functions. Since none of these particular functions requires any explicit parameters, the designer naturally uses 24 of the 32 available function keys to invoke them. The application is then moved to an installation where only 16 function keys are available. The semantics of action suggests at least two reasonable adaptations, either of which may be implemented rather simply. Neither of these adaptations is tractable within the constraints of a traditional device independent system.

One adaptation is to assign alternate functions to some of the function keys, and to assign one key to specify the alternate function. That is, a sequence of two key presses may be required. A typical graphics system would require considerable modification of application program logic, but the semantics of action requires only the incorporation (from a library) of one software device to simulate the larger function keyboard. Another adaptation would allow command strings to be interpreted as replacing some of the needed keys. For instance, a device could be created which would simulate a function key press upon receiving a keyboard string containing the name of the function to be invoked. In the traditional device independent graphics environment, this approach would require substantial modification of the application program.
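The first adaptation can be sketched as one such library device (Python; the names and the choice of key 0 as the prefix key are illustrative, not from the paper). It consumes press events from n physical keys and signals up to 2(n-1) distinct function events, so 16 keys comfortably cover 24 functions.

```python
class PrefixKeyboard:
    """Sketch: simulate a larger function keyboard with a prefix ("alternate") key.

    Key 0 is the prefix key; keys 1..n-1 each select one of two functions,
    depending on whether the prefix key was pressed immediately before.
    """

    def __init__(self, n_keys, signal):
        self._n = n_keys
        self._signal = signal
        self._alternate = False   # hidden state: was the prefix key just pressed?

    def on_key_press(self, key):  # linked to the physical keys' press events
        if key == 0:              # prefix key: arm the alternate set
            self._alternate = True
            return
        offset = self._n - 1 if self._alternate else 0
        self._alternate = False
        self._signal('function', key - 1 + offset)   # cause a function event
```

The application's own event bindings are untouched; only this device is inserted between the physical keys and the application.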

CONCLUSIONS

The semantics of action provides the basis of a meta-language for describing interactive languages and systems. Although it is not yet fully developed, it promises to provide the framework for a new class of interactive languages. Such languages will be more expressive, less dependent on hardware, and in general kinder to the end users. Separation of concerns is the principal advantage of this approach over others. It permits an application to be developed without great concern over the interactive devices to be used. This in turn reduces the impact of any change in the final choice of devices and dialog structure. Separation of the concerns of description, implementation and use of interactive devices permits devices to be treated as modular building blocks.

Furthermore, the manner in which devices (and thus systems) are described promises to improve verifiability of interactive systems. Non-interactive algorithms are, in principle, subject to formal proofs of correctness. Most notable of the proof methodologies is that introduced by Dijkstra (7), which identifies a program with a predicate transformer. It has the added advantage of simplifying the discovery of good algorithms. Unfortunately, the methodology is not applicable to interactive systems described in the conventional manner. On the other hand, the semantics of action is compatible in that each action component of a device may be identified with a predicate transformer, and the conditions under which it is invoked are easily demonstrated. Extension of the proof methodology to handle entire interactive systems is an objective of on-going study.

Implementation in software of systems described using the semantics of action should be straightforward and simple. Indeed, it should be more so than with traditional approaches. For instance, the notion of an action being caused by an interrupt is directly supported by most computing equipment, whereas the notion of waiting for one of a set of interrupts is not. It is worth noting that the semantics of action can be defined in terms of a restricted subset of Hoare's semantics of inter-process communication (11).

However, the semantics of action currently deals only with input to a static structure. It does not yet deal effectively with the problem of describing a complete, dynamic system. The remaining paragraphs discuss some extensions which are needed.

A simple means is needed by which the structure of a system may be specified and altered dynamically. Multiple independent copies of some devices are needed in most applications. Devices must be created, linked into their action structure, and then used and deleted. Some means is needed to specify groups of devices as modules, if the complexity of system architecture is to be controlled. Although the results are not yet complete, a simple extension of the notion of devices as data types appears to permit most of these issues to be resolved by suitable scope rules.

Extension of the semantics of input to a complete semantics of interaction is another objective of continuing research. It appears that display devices may be treated in a manner similar to input devices. Indeed, this appears to be necessary since some function keys display lights and since some graphic display devices support input via devices such as light pens. It seems simple to include display elements in the state of a device, and such an extension would provide a generalization of the loopless programming features of GRASS (6). However, the diversity of display technologies causes problems, if a useful degree of device independence is desired. Nevertheless, solutions are expected in the near future, permitting a complete semantics of interaction to emerge.

ACKNOWLEDGEMENTS

I wish to thank Jan van den Bos, whose suggestion prompted me to reduce these ideas to writing, and whose constructive criticism of the original manuscript led to significant clarification. Thanks also to my wife, Gerry, for putting up with my long evenings of labor over the paper, and for providing moral support when I needed it most. My employer, Martin Marietta Aerospace, provided the word processing equipment I used to prepare this paper.

REFERENCES

1. Anson, E. Some aspects of the design of interactive data structures. Computer Graphics Group technical report, Nijmegen University, Nijmegen, The Netherlands, June 1976.

2. Anson, E. GPGS-370 device driver specifications. Computer Graphics Group technical report, Nijmegen University, Nijmegen, The Netherlands, July 1976.

3. Anson, E. GPGS-370 Tektronix driver installation. Computer Graphics Group technical report, Nijmegen University, Nijmegen, The Netherlands, July 1976.

4. Britton, E.G., Lipscomb, J.S., and Pique, M.E. Making nested rotations convenient for the user. Proc. SIGGRAPH '78 (Computer Graphics) 12,3 (Aug. 1978), 222-227.

5. Caruthers, L.C., van den Bos, J., and van Dam, A. GPGS: A device-independent general purpose graphic system for stand-alone and satellite graphics. Proc. SIGGRAPH '77 (Computer Graphics) 11,2 (Summer 1977), 112-119.

6. DeFanti, T.A. Toward loopless interactive graphics programming. Proc. Conference on Computer Graphics, Pattern Recognition, and Data Structures, IEEE Catalog No. 75CH0981-1C, May 1975, pp. 352-355.

7. Dijkstra, E.W. A Discipline of Programming. Prentice-Hall, Englewood Cliffs, New Jersey, 1976.

8. Foley, J.D., and Wallace, V.L. The art of natural man-machine conversation. Proc. IEEE 62,4 (April 1974), 462-471.

9. Geyer, K.E., and Wilson, K.R. Computing with feeling. Proc. Conference on Computer Graphics, Pattern Recognition, and Data Structures, IEEE Catalog No. 75CH0981-1C, May 1975, pp. 343-349.

10. Hoare, C.A.R. Notes on data structuring. In Structured Programming, Dahl, O.-J., Dijkstra, E.W., and Hoare, C.A.R., Eds., Academic Press, New York, 1972, pp. 83-174.

11. Hoare, C.A.R. Communicating sequential processes. Comm. ACM 21,8 (Aug. 1978), 666-677.

12. Liskov, B.H., and Zilles, S.N. Specification techniques for data abstractions. IEEE Transactions on Software Engineering 1,1 (March 1975), 7-19.

13. Liskov, B., Snyder, A., Atkinson, R., and Schaffert, C. Abstraction mechanisms in CLU. Comm. ACM 20,8 (Aug. 1977), 564-576.

14. Parnas, D.L. On the criteria to be used in decomposing systems into modules. Comm. ACM 15,12 (Dec. 1972), 1053-1058.

15. Parnas, D.L., and Siewiorek, D.P. Use of the concept of transparency in the design of hierarchically structured systems. Comm. ACM 18,7 (July 1975), 401-408.

16. SIGGRAPH-ACM GSPC. Status report of the Graphic Standards Planning Committee of ACM/SIGGRAPH. Computer Graphics 11,3 (Fall 1977).

17. Van den Bos, J. Definition and use of higher-level graphics input tools. Proc. SIGGRAPH '78 (Computer Graphics) 12,3 (Aug. 1978), 38-42.

18. Wallace, V.L. The semantics of graphic input devices. Proc. ACM Symposium on Graphic Languages (Computer Graphics) 10,1 (Spring 1976), 61-65.
