Prompted by a graphics report, the author read A. Feininger's book The Key to Today's Photography. In the preface of that book, A. Feininger explains why he wrote it and what he is aiming for. The really surprising point, however, is the set of definitions around which he centers his book:
The object: the reason for taking a picture and consequently the most important component of the picture-taking process. All other factors, such as photographic technique and the arrangement around the object, are irrelevant to the meaning and value of the picture.
The photographer: a photograph develops from the choice of an object and from its interpretation and treatment by the photographer. The object is the given constant; interpretation and treatment are the variables.
The picture: serves a specific purpose - namely communication and dialogue with others - but is never a purpose in itself.
In these definitions A. Feininger used abstraction to express something very complicated (photography) in a few words. He did not explain how a camera works or what the difference is between the various types of cameras, and he did not say a word about that awful bunch of extras photographers normally use. In other words, he specified the word photography. In [9] the authors write:
a fair amount of confusion has been caused by the fact that the word specification is used with two distinct meanings in the computer literature. The dictionary definition of the word specification covers any communication that provides additional information about the object being specified - any communication that makes the description of the object more specific. In the usual engineering usage the word has a much narrower meaning. The specification is a precise statement of the requirements that a product must satisfy.
In this paper I will use both kinds of specification to discuss some problems one encounters when designing graphics systems.
The object to be specified is called a module. Let us treat this module as a black box and, more importantly, describe its relation to the external world. The interface between this module and the outside world is defined by a certain set of access functions. The user of the module (which may itself be a module) knows nothing about the implementation of the black box. D. Parnas introduced two types of functions defining the interface:
V-functions return values that give information about the data stored within the module. The execution of an O-function normally changes a set of internal module data, and in most cases an O-function will eventually cause a change in the values of V-functions. The specification of a module does not refer to the module's internal data: the internal data are not part of the requirements, they are part of the solution. A specification makes use of abstraction by defining characteristic V- and O-functions and by describing their influence on each other; in other words, a specification determines the externally visible behaviour of a module. In doing so, the V-functions serve as indicators of the visible effect. It is important to distinguish two different values of a V-function: its value before an execution takes place and its value after an O-function has affected the module.
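The interplay of V- and O-functions can be sketched in a few lines. The following is a minimal sketch of a Parnas-style module (a stack is used here only as a familiar example; the class and method names are assumptions, not part of the paper). The internal list is hidden - it belongs to the solution, not to the requirements.

```python
class StackModule:
    def __init__(self):
        self._items = []          # internal data: hidden from the user

    # V-functions: deliver values, cause no side effect
    def depth(self):
        return len(self._items)

    def top(self):
        if not self._items:
            raise RuntimeError("errorcall: top of empty stack")
        return self._items[-1]

    # O-function: changes the values observable through the V-functions
    def push(self, x):
        self._items.append(x)

s = StackModule()
s.push(7)
# the O-function's effect is visible only through the V-functions:
assert s.depth() == 1 and s.top() == 7
```

Note how the specification of such a module can be given entirely in terms of `depth`, `top` and `push`, comparing each V-function's value before and after the O-function runs.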
Intentionally, the visible effect of a module is an abstraction of the module's properties. In our case the visible effect is even physiologically real, which leads to the idea of considering the graphics system as a whole to be a module. This module's user is a human being; the picture on the screen is the effect visible to the user. Let us interpret the set of graphic calls at the user's disposal as the O-functions or operations. Still undefined is the total set of V-functions. The only thing we know is that all these V-functions together form the visible effect for the outside world.
Figure 1 shows the module as a black box possibly affected by a set of operations FOi. The operating conditions - those which allow the O-functions to affect the module - are part of the specification. To answer the question 'what are the V-functions or states?' one had better ask: what does the user see?
The user sees (on the viewing surface)
The total visible effect is a picture created by a beam/pen moving on the viewing surface using at least two device states (pen up, pen down). The user will notice that the picture is drawn element by element, with the speed varying from device to device.
The user may furthermore think: the final product of this graphic device is a picture similar to a photo. Consequently he might think that the (internal mechanism) software of this graphic device probably works in a way similar to a camera. With that idea in mind, the user becomes the victim of a heavy underestimation of a camera's possibilities and, conversely, of an overestimation of a graphics system's capabilities. What is the difference?
A photographer notices an existing object, or several already existing objects. From these single parts he forms a model for his purpose. He then grasps his camera, sets the viewing parameters and takes the picture. If he is lucky enough to own a Polaroid camera, he will after a short time get a picture which is an instance of the model he formed previously.
The user of a graphic output system has to start by defining an object that may exist only in his mind. That means he is forced, in analogy to the photo-taking process, to first write the following program:
Create (Object)
    (Primitive)_1 ... (Primitive)_n
Close (Object)
Set Viewing Parameters
Draw (Object)
He will never succeed, however, in writing this program, because there exists no user manual (of a so-called general-purpose, device-independent ... graphics system) in which he could find the command Draw Object. The program the user has to write instead is the following:
Set Viewing Parameters
Create (Object)
    (Primitive)_1 ... (Primitive)_n
Close (Object)
The graphics system does something different from what the user wanted to do. The graphics system starts drawing the picture (at run time) without waiting for the command Draw Object. In other words: the definition of the object and the picture presentation process are combined and executed simultaneously. The principle of this picture-generating process is simply the simulation of a draftsman who works with a sheet of paper, a pencil, a straightedge and several character and marker stencil plates. Today's graphics systems' output functions faithfully represent this draftsman's tools and actions. The command MOVTO corresponds to the movement of the draftsman's hand, which directs the pencil to the starting point of a geometric primitive, e.g. a polygon, without touching the paper. The position of the pencil corresponds to the Current Position of the graphics system. The command LINTO describes the drawing of a line, with the pencil moving on the paper's surface along the edge of a ruler. It is obvious which analogies apply to the TEXT and MARKER primitives.
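The draftsman simulation above can be sketched directly. The following is a minimal sketch (class and method names are assumptions chosen to mirror MOVTO/LINTO): the pen-up move only changes the Current Position, while the pen-down move draws a segment from the Current Position.

```python
class Draftsman:
    def __init__(self):
        self.cp = (0.0, 0.0)      # the Current Position of the graphics system
        self.segments = []        # the picture drawn so far (drawn immediately,
                                  # i.e. at definition time, as described above)

    def movto(self, x, y):        # pen up: only the Current Position changes
        self.cp = (x, y)

    def linto(self, x, y):        # pen down: draw an edge from CP to (x, y)
        self.segments.append((self.cp, (x, y)))
        self.cp = (x, y)

d = Draftsman()
d.movto(0, 0)                     # position the pencil without touching paper
d.linto(1, 0)                     # two edges of a polygon
d.linto(1, 1)
assert d.segments == [((0, 0), (1, 0)), ((1, 0), (1, 1))]
```

The essential point is that `segments` grows as a side effect of every call: there is no separate "Draw Object" step, exactly the coupling of definition and presentation criticized above.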
The same conceptual model is used for the graphic input operations by defining a logical action of the draftsman (e.g. pointing to a displayed element) to be an input primitive (which is unfortunately called a logical device). That concept seems to inherit a strongly device-dependent component. The output primitives are almost identical to the machine instructions of a line-generating device. This is the main reason why the Current Position and the MOVTO requirement were completely removed when, for example, the DIN GKS system [5], [6] was redefined. The GKS system contains some more components implementing the principle of separating object description from picture representation. Thus, pen numbers are employed to describe objects; the corresponding pen attributes are not assigned until the view of the object is drawn on the current workstation.
The specification technique mentioned above was used as a means of communication in the design and implementation phase of a pilot implementation of a graphics system which contains elements of the GKS system [6] as well as of the SIGGRAPH Core proposal [4]. In the following I will describe the concept of the input module of this system.
The input system's layout is based on three keywords: workstation, logical input interface and symmetry.
A workstation is formed by any set of output and input devices supported in a standard way by low-level software. The workstation features the following fundamental capabilities:
The logical input interface is completely independent of physical input devices and interaction techniques. Among others, the following two capabilities distinguish the logical input interface:
Symmetry is a useful principle for enabling a user to operate the interface in a more natural way. This principle may be applied from different points of view:
The design of the logical input interface was guided by the fact that there exists a difference between man-machine communication and graphic input. Man-machine communication as described in [1] is a short loop between a displayed picture (image) and a human user. Graphic input, on the other hand, is a long loop between the application program (a problem-solving system) and a human user. Interaction techniques, real-time facilities and a bunch of low-level software characterize the short loop. The human user closes this short loop by communicating with the machine, performing actions with devices. The user's action, or more precisely the meaning of the user's action, is the topic to be reflected in a graphic language (this includes a package implementation). The input device, i.e. the pencil in the draftsman's hand, is a necessary evil, a device-dependent component based on today's technology. The physical input device does not necessarily exist in reality, for example if the user's action is performed directly by the user's voice or by touching a touch-sensitive viewing surface. Keeping this in mind, it is not logical to define an input system on the basis of devices.
The long loop between an application program and an operator uses the data generated by interaction techniques to solve problems that cannot be handled by the short loop. The data generated by actions and low-level software should likewise be independent of the technique that may be used to generate them. If, for example, a numerical value must be transferred to the application program, then the corresponding data item of this transfer should simply contain a numerical value instead of a concatenation of low-level data resulting from an interaction technique (e.g. the light-handle or potentiometer technique). Referring to our previous example, the execution of the short loop results in a value instrument displayed on the screen, which enables an operator to define a numerical value in a very elegant way by pointing the lightpen at this value instrument. Pressing the lightpen switch terminates the user's process of defining a value. The effect visible to the application program after this activity is a numerical value. After all, it is not logical to treat a physical button (on a device) as a logical device. The user's intention in the example above was to define a value; it was not his intention to press the lightpen switch. (The latter can be treated as an internal select device or logical hardware.)
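The reduction from short-loop detail to long-loop data can be sketched as follows (the event format and function name are assumptions for illustration): however many low-level lightpen events the interaction technique produces, the application program sees only one numerical value.

```python
def logical_valuator(device_events):
    """Reduce the low-level events of some technique (light handle,
    potentiometer, ...) to the single datum the application needs."""
    value = None
    for ev in device_events:
        if ev[0] == "track":          # short-loop detail: instrument reading
            value = ev[1]
        elif ev[0] == "switch":       # lightpen switch terminates the input
            return value
    return value

# the short loop produced many events; the long loop receives one value:
events = [("track", 0.2), ("track", 0.7), ("track", 0.75), ("switch", None)]
assert logical_valuator(events) == 0.75
```

The switch press itself never reaches the application program; it acts only as the internal select device mentioned above.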
The principle of symmetry was first published (to the author's knowledge) in [7]. One of that paper's statements claims that there is a certain logic in treating input and output types in an equivalent way. In the following, this principle will be applied to the output functions available in the GKS system [6], which are
The description of an object is performed by a sequence of commands, each command defined by a type identifier followed by a sequence of positions and/or a sequence of characters. With this in mind, the input types are defined as:
User-defined objects are created for the purpose of being manipulated by interactive actions. Numerical values are ubiquitous in the process of object description. The definition of the input types mentioned above is guided by the argument that the input data is the most essential component of information to be transferred to the next higher layer of software. Sequences of positions, of characters and of numerical values are the units from which to build any geometric input type. As potential geometric input types, [7] mentions: position, vector string, freehand line, and text following a starting position. Each of these, and any other geometric input type report, can always be composed from a report identifier, a sequence of positions and a sequence of characters. It also seems preferable to work with a sequence of identifiers of user-defined objects and use it as a compound event report. That is the case, for example, if a set of segments has to be handled interactively in one manipulation process (e.g. deleting a set of segments). Another important criterion of every graphic input is direct visual feedback (echo) to the operator. This task has to be handled by the workstation's low-level software (or, even better, hardware). An important requirement which has very often been neglected in the past is to give the operator an opportunity to correct mistakes. For example, a set of user-defined objects which have already been picked should stay in a mode that allows editing before the user initiates the transfer of the finalized sequence. The correction scheme should apply to all types of basic input data.
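The composition argument above can be made concrete with a small sketch (the record layout and all names are assumptions): every geometric input report, simple or compound, is just an identifier plus sequences of positions and characters.

```python
def make_report(report_id, positions=(), characters=()):
    """A geometric input type report: identifier + sequences only."""
    return {"id": report_id,
            "positions": list(positions),
            "characters": list(characters)}

# a freehand line is just an identifier plus its sampled positions:
freehand = make_report("freehand", positions=[(0, 0), (1, 2), (3, 1)])

# a compound report: identifiers of already picked user-defined objects,
# editable until the operator finalizes the sequence (fault correction):
picked = ["seg12", "seg7", "seg9"]
picked.remove("seg7")             # the operator corrects a wrong pick
compound = make_report("delete-set", characters=picked)
assert compound["characters"] == ["seg12", "seg9"]
```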
Figure 2 shows that the interface to the next higher software layer is characterized by five data gates with storage capability. Event data have to be stored after their generation by the logical input sources. There may be any number of logical input sources; they can all be categorized into five classes. Every workstation is expected to house at least one logical source of each class. The five categories are:
Each logical input source owns one register showing its current state. The logical input sources operate independently of each other, but several sources may be working at the same time. Turning off (disabling) one source has no influence on data already generated by this source or by any other.
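The data gates and the independence of the sources can be sketched as follows (a minimal sketch; the category names match the five classes, everything else is assumed): each category owns a FIFO gate, each source a state register, and disabling a source leaves stored data untouched.

```python
from collections import deque

CATEGORIES = ("pick", "locator", "choice", "valuator", "string")

gates   = {cat: deque() for cat in CATEGORIES}   # event storage per data gate
enabled = {cat: True for cat in CATEGORIES}      # state register per source

def post_event(cat, report):
    """A logical source deposits an event report in its data gate."""
    if enabled[cat]:
        gates[cat].append(report)

post_event("locator", (0.3, 0.6))
post_event("choice", 2)
enabled["locator"] = False        # disabling: already stored data is untouched
assert len(gates["locator"]) == 1 and gates["choice"][0] == 2
```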
The input module, implemented up to now as a stand-alone version, can be described by the following operations and value-delivering functions. These are:
These functions are a subset of the logical input interface and will be specified in detail in Appendix 1. More details of the concept and of the respective implementation can be found in [8].
The smallest unit of the logical source administration is defined by the pair (category, unit) under the assumption discussed above. The mapping between logical input sources and physical input devices joins the configuration existing in reality and the logical input interface. The task is to determine definitively how the data generated by physical devices are transferred and transformed to a pair (category, unit). Obviously it is not sufficient to assign a physical input device to a logical input source and vice versa, because no statement is made about the nature of the data transformation. The pair (valuator, lightpen), for example, is insufficient because it does not indicate whether a light handle or a potentiometer is employed. On the other hand, a potentiometer does not necessarily need to be realized by a lightpen. Fig. 3 shows that the mapping is performed by a two-level process.
Furthermore, fig. 3 shows:
Whatever is considered to be a technique or a physical device has to remain an object of the implementation process. This also means that a combination of hardware devices should be treated as one physical device. Even though the confusion of a select device with the choice category is widespread, this conceptual approach deliberately denies the user access to the select device.
The mapping process described in fig. 3 does not yet sufficiently determine the way from physical devices to logical sources. Since one physical device may be used to realize several techniques, run-time data have to be taken into account to find an unambiguous way from physical devices to one category. All these problems must be solved by the respective implementation, but they do not need to be standardized. On the other hand, the logical input interface should contain the assign statement, and maybe even the use statement, to support a graphics system requirement which is called the human factor.
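The two-level mapping can be sketched as two small tables and two statements (a sketch under assumed table contents; all names are illustrative): assigning only a device to (category, unit) would not say which data transformation is meant, so the technique is assigned first and the hardware second.

```python
# level 1: which techniques the implementation supports per category
implemented = {("valuator", "light-handle"), ("valuator", "potentiometer")}
# level 2: which categories a (technique, device) pair may realize
allowed = {("light-handle", "lightpen"): {"valuator"},
           ("potentiometer", "lightpen"): {"valuator"},
           ("potentiometer", "dial"): {"valuator"}}

technique = {}   # (cat, unit) -> assigned technique
hardware  = {}   # (cat, unit) -> physical device used

def assign(cat, unit, techn):
    if (cat, techn) not in implemented:
        raise RuntimeError("errorcall: technique not implemented")
    technique[(cat, unit)] = techn

def use(cat, unit, hdev):
    if cat not in allowed.get((technique.get((cat, unit)), hdev), set()):
        raise RuntimeError("errorcall: device cannot realize technique")
    hardware[(cat, unit)] = hdev

assign("valuator", 1, "potentiometer")   # the same device could also have
use("valuator", 1, "dial")               # realized a light handle instead
assert hardware[("valuator", 1)] == "dial"
```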
The intention of this paper was to show that a strongly device-oriented component is inherent in today's graphics systems. Another statement of this paper is that graphic input and man-machine communication are not necessarily identical. The logical input interface discussed in some detail was guided by the concept of [2], but this concept was interpreted from another point of view. The functional description of the logical input interface defined in Appendix 1 represents a subset affected by some ad hoc tricks. The specification technique in [9] may be used to specify the concept in a more adequate way.
The author appreciates the numerous critical discussions he had with H. Wente [8].
[1] J. D. Foley, V. L. Wallace, "The Art of Natural Graphic Man-Machine Conversation", Proceedings of the IEEE, Vol. 62, No. 4, April 1974.
[2] V. L. Wallace, "The Semantics of Graphic Input Devices", Computer Graphics, Vol. 10, No. 1, Spring 1976.
[3] R. F. Sproull, E. L. Thomas, "A Network Graphics Protocol" (August 1974).
[4] "Status Report of the Graphics Standards Planning Committee of ACM/SIGGRAPH", Computer Graphics, Vol. 11, No. 3.
[5] R. Eckert, J. F. Prester, J. G. Schlechtendahl, P. Wiskirchen, "Functional Description of the Graphical Core System GKS as a Step towards Standardisation", Informatik-Fachberichte, Vol. 11: Methoden der Informatik für Rechner-unterstütztes Entwerfen und Konstruieren, GI-Fachtagung München, 1977.
[6] "Proposal of Standard DIN 0066252: Information Processing - Graphical Kernel System (GKS)" (Dec. 1978).
[7] G. F. P. Deecker, J. P. Penny, "Standard Input Forms for Interactive Computer Graphics", Computer Graphics, Vol. 11, No. 1, Spring 1977.
[8] H. Wente, "Entwurf und Implementierung eines geräteunabhängigen Eingabesystems" (Design and implementation of a device-independent input system), diploma thesis, T.H. Darmstadt (May 1978) (in German).
[9] W. Bartussek, D. L. Parnas, "Using Traces to Write Abstract Specifications for Software Modules", Department of Computer Science, University of North Carolina at Chapel Hill, Report No. TR 77-012 (Dec. 1977).
In the following raw specification the keywords state and operation are used instead of V-function and O-function. The intention of this strategy is to demonstrate that Appendix 1 is very close to a specification in the dictionary sense of the word.
The state await shows that the next higher software level uses the logical input interface in a synchronous way; that is, there exists a polling loop between the two layers of software. The await state delivers the category of the event occurring next.
Note: There is no side effect with respect to any data gate.
A2 defines the data type of each category's sample register.
A3 and A4 define the data types possibly stored in each data gate. events is a state showing the number of events stored in a data gate.
Note: There exists only a one-level naming mechanism. The locator category delivers normalized coordinates; the string category is able to store a sequence of strings.
state await(time)        co wait for event occurring next co
param:       int time
effect:      none
init value:  undefined
poss values: ref device, nil

Properties of the await state:
(1) ∃t ∈ T, ∃l ∈ L: events(l)_t > events(l)_t0  ⇒  await(time) = l
(2) ∀t ∈ T, ∀l ∈ L: events(l)_t = events(l)_t0  ⇒  await(time) = nil
with T := [t0, t0 + time]; t0 := time of the await call;
     L := {pick source, locator source, choice source, valuator source, string source}
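The two await properties can be sketched operationally (a minimal sketch; the event-counter representation, the `poll` hook and all names are assumptions, not part of the specification): poll the per-source counters for at most `time` steps and deliver the source whose counter grew since the await call, or nil (None) otherwise.

```python
def await_event(events, snapshot, time, poll=None):
    """events: callable returning {source: count};
    snapshot: the counts at the time of the await call (t0);
    poll: optional hook advancing the surrounding system one step."""
    for _ in range(time):
        if poll is not None:
            poll()
        current = events()
        for source, count in current.items():
            if count > snapshot[source]:
                return source         # property (1): some counter grew
    return None                       # property (2): nothing in [t0, t0 + time]

counts = {"pick": 0, "locator": 0}
ticks = {"n": 0}
def poll():
    ticks["n"] += 1
    if ticks["n"] == 3:
        counts["locator"] += 1        # a locator event occurs at step 3

assert await_event(lambda: counts, {"pick": 0, "locator": 0}, 10, poll) == "locator"
assert await_event(lambda: counts, dict(counts), 2) is None
```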
state sample(unit)       co sampling of choice co
param:       int unit
effect:      if ¬(0 < unit ≤ nunits(choice)) then errorcall fi
init value:  undefined
poss values: int choice-id, nil

state sample(unit)       co sampling of string co
param:       int unit
effect:      if ¬(0 < unit ≤ nunits(string)) then errorcall fi
init value:  undefined
poss values: (int string-length, int cursor)

state sample(unit)       co sampling of locator co
param:       int unit
effect:      if ¬(0 < unit ≤ nunits(locator)) then errorcall fi
init value:  undefined
poss values: (real x, real y, bool select)

state sample(unit)       co sampling of pick co
param:       int unit
effect:      if ¬(0 < unit ≤ nunits(pick)) then errorcall fi
init value:  undefined
poss values: (ref user object userobj, bool select), nil

state sample(unit)       co sampling of valuator co
param:       int unit
effect:      if ¬(0 < unit ≤ nunits(valuator)) then errorcall fi
init value:  undefined
poss values: (real z, bool select)
sampling properties: If a unit is disabled, then the corresponding sample value is undefined.
operation takec          co take next choice event co
param:       none
effect:      if events(choice) > 0 then events(choice) := events(choice) - 1 fi
init value:  undefined
poss values: (int unit, int choice-id)

operation takes          co take next string event co
param:       none
effect:      if events(string) > 0 then events(string) := events(string) - 1 fi
init value:  undefined
poss values: (int unit, int string-length, string string)

operation takel          co take next locator event co
param:       none
effect:      if events(locator) > 0 then events(locator) := events(locator) - 1 fi
init value:  undefined
poss values: (int unit, real x, real y)

operation takep          co take next pick event co
param:       none
effect:      if events(pick) > 0 then events(pick) := events(pick) - 1 fi
init value:  undefined
poss values: (int unit, ref user object userobj)

operation takev          co take next valuator event co
param:       none
effect:      if events(valuator) > 0 then events(valuator) := events(valuator) - 1 fi
init value:  undefined
poss values: (int unit, real z)
General take properties: If a data gate is empty (events(cat) = 0) then the result is undefined.
state events(cat)        co get number of entries in a data gate co
param:       ref category cat
effect:      if cat ∉ L then errorcall fi
init value:  0
poss values: int
operation clear(cat)     co clear data gate(s) co
param:       ref category cat
effect:      if cat = all then ∀l ∈ L: events(l) := 0
             elsf cat ∉ L then errorcall
             else events(cat) := 0 fi
values:      none
The following two operations contain some states not defined in this subset. These are:
operation assign(cat, unit, techn)    co assign technique 'techn' to (cat, unit) co
params:      ref category cat; ref technique techn; int unit
effect:      if cat ∉ L then errorcall
             elsf ¬(0 < unit ≤ nunits(cat)) then errorcall
             elsf ¬implemented(cat, techn) then errorcall
             else if opstate(cat, unit) = active then opstate(cat, unit) := inactive fi;
                  technique(cat, unit) := techn;
                  hardware(cat, unit) := default for this technique
             fi
operation use(cat, unit, hdev)        co use hardware 'hdev' to perform the technique assigned previously co
params:      ref category cat; ref hardware hdev; int unit
effect:      if cat ∉ L then errorcall
             elsf ¬(0 < unit ≤ nunits(cat)) then errorcall
             elsf opstate(cat, unit) = active then errorcall
             elsf technique(cat, unit) = undefined then errorcall
             elsf cat ∉ allowed(technique(cat, unit), hdev) then errorcall
             else hardware(cat, unit) := hdev
             fi
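The take/events/clear subset of the specification above can be sketched as follows (a minimal sketch; the FIFO representation and all names are assumptions): each data gate is a queue, taking an event decrements events(cat) by delivering the oldest report, and the result is undefined (None here) when the gate is empty.

```python
from collections import deque

L = ("pick", "locator", "choice", "valuator", "string")
gates = {cat: deque() for cat in L}

def events(cat):
    """state events(cat): number of entries in the data gate."""
    if cat not in L:
        raise RuntimeError("errorcall")
    return len(gates[cat])

def take(cat):
    """operation take*: deliver the next event report of a category."""
    if events(cat) > 0:
        return gates[cat].popleft()
    return None                      # undefined when the gate is empty

def clear(cat):
    """operation clear: empty one data gate, or all of them."""
    targets = L if cat == "all" else (cat,)
    for l in targets:
        gates[l].clear()

gates["choice"].append((1, 4))       # an event report (unit, choice-id)
assert events("choice") == 1
assert take("choice") == (1, 4) and events("choice") == 0
clear("all")
assert all(events(c) == 0 for c in L)
```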