The Working Group's discussion concentrated upon the following issues:
This group came up with the following nomenclature. Later in the Workshop nomenclature was re-examined as reported in Chapter 23.
A window is a composite object which the user perceives as an entity and with which the user interacts.
A panel is a lower level object from which windows are composed and with which the application interacts. Panels are often, but need not be, rectangular.
The window manager groups one or more panels into a window. Such panels are usually juxtaposed, but need not be.
Panels grouped into a window are related hierarchically. They may, for instance, be moved as a group.
Output to a window is clipped by the window manager to a panel within it.
Input may be associated with a window or panel, which is then referred to as the listener.
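The nomenclature above can be sketched as a simple data structure. This is an illustrative assumption only; the Working Group did not specify any representation, and all names here are invented for the sketch.

```c
#include <stddef.h>

/* Hypothetical sketch of the nomenclature: a window is a composite of
 * panels, one of which (the listener) may be designated to receive input. */
typedef struct Panel {
    int x, y, width, height;      /* panel rectangle in screen coordinates */
    struct Panel *next;           /* sibling panels in the same window */
} Panel;

typedef struct Window {
    Panel *panels;                /* panels grouped into this window */
    Panel *listener;              /* panel currently receiving input, if any */
} Window;

/* Panels in a window are related hierarchically: moving the window
 * moves every panel in the group. */
static void window_move(Window *w, int dx, int dy)
{
    Panel *p;
    for (p = w->panels; p != NULL; p = p->next) {
        p->x += dx;
        p->y += dy;
    }
}
```

The hierarchical relation is what distinguishes a window from a loose collection of panels: group operations such as the move above apply to all members at once.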
Figure 20.1 summarizes the Working Group's view of a system architecture. Graphics in this context includes text. API denotes the Application Program Interface.
The following issues relate to this architecture.
The record of input events returned to the application should be at the same level as the output primitives accepted by the window manager. In particular, coordinate values should be relative to the panel in which the event occurred, and in the same world system used by the application to define the contents of the panel.
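The coordinate convention just described can be sketched as a linear mapping from the panel's screen rectangle to the world-coordinate range the application used to define its contents. The structure and names below are illustrative assumptions, not part of any system discussed.

```c
/* Hypothetical sketch: translate a screen-space event position into
 * the world coordinate system of the panel in which it occurred. */
typedef struct {
    int sx, sy, sw, sh;           /* panel rectangle in screen coordinates */
    double wx0, wy0, wx1, wy1;    /* world-coordinate range mapped onto it */
} PanelMap;

static void screen_to_world(const PanelMap *m, int ex, int ey,
                            double *wx, double *wy)
{
    *wx = m->wx0 + (double)(ex - m->sx) * (m->wx1 - m->wx0) / m->sw;
    *wy = m->wy0 + (double)(ey - m->sy) * (m->wy1 - m->wy0) / m->sh;
}
```

Reporting events in world coordinates keeps the application unaware of where the window manager has placed the panel on the screen.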
However, it was felt that the client (application) process should be able to control both the level of detail of reported events and the method of reporting. The events eligible for reporting should include:
The last two imply the availability of an unencoded keyboard, which should be capable of being coded by the client process via a code table. (The difficulty of doing this in a device independent manner was recognized.)
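A client-supplied code table of the kind mentioned might look as follows. The table layout, size, and handling of the shift state are illustrative assumptions; they sidestep the device-independence difficulty the group acknowledged.

```c
/* Hypothetical sketch: a client-supplied code table translating raw
 * key numbers from an unencoded keyboard into characters. */
#define NKEYS 128

typedef struct {
    char normal[NKEYS];           /* translation without shift */
    char shifted[NKEYS];          /* translation with shift held */
} CodeTable;

static char translate_key(const CodeTable *t, int key, int shift)
{
    if (key < 0 || key >= NKEYS)
        return '\0';              /* unknown keys translate to nothing */
    return shift ? t->shifted[key] : t->normal[key];
}
```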
The possibility of allowing the client process to download a procedure to be executed in response to a specific class of input events was discussed, and felt to be desirable in principle. However, more work was needed to establish the practicality in general of programmable window managers. The success of Jim Gosling's SunDew project would be an indicator, but it was felt that it would be fruitful to initiate a UK investigation into this issue. John Butler pointed out in discussion that in the Microsoft MS-Windows system an input event received by a client process could be sent back to the window manager for interpretation by one of a set of translation routines.
Although not strictly required by the GKS input model it was felt essential that the client process should be able to request notification by signal (as well as by an event record) of the arrival of a particular class of event. It was also felt essential that events should be timestamped, so that the client process could recognize such composite events as a double click on the mouse button.
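Timestamped events make composite-event recognition a simple client-side computation. The sketch below detects a double click from press timestamps; the 250 ms threshold is an illustrative assumption, not a figure from the report.

```c
/* Hypothetical sketch: recognizing a double click from timestamped
 * button-press events delivered by the window manager. */
#define DOUBLE_CLICK_MS 250

typedef struct {
    long last_press_ms;           /* timestamp of previous press; -1 if none */
} ClickState;

/* Returns 1 if this press completes a double click, 0 otherwise. */
static int register_press(ClickState *s, long now_ms)
{
    int dbl = (s->last_press_ms >= 0 &&
               now_ms - s->last_press_ms <= DOUBLE_CLICK_MS);
    s->last_press_ms = dbl ? -1 : now_ms;   /* avoid chaining triple clicks */
    return dbl;
}
```

Without timestamps the client would have to infer timing from the arrival of event records, which depends on scheduling and transport delays.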
Event records should contain identification of the panel in which they occurred. Events in title panels, scroll bars, etc. would normally be interpreted by the window manager, which by default has control of these panels, but can be passed back to the client if the window manager is requested to do so.
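The default routing rule can be stated compactly in code. The panel classification and flag below are illustrative assumptions used to make the rule concrete.

```c
/* Hypothetical sketch of the default routing rule: the window manager
 * interprets events in title panels and scroll bars itself, unless
 * the client has asked for those events to be passed back. */
typedef enum { PANEL_CONTENT, PANEL_TITLE, PANEL_SCROLLBAR } PanelKind;

typedef struct {
    int panel_id;                 /* identification carried in event records */
    PanelKind kind;
    int pass_to_client;           /* client requested these events back */
} PanelInfo;

/* Returns 1 if the window manager should interpret the event itself. */
static int wm_handles_event(const PanelInfo *p)
{
    return p->kind != PANEL_CONTENT && !p->pass_to_client;
}
```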
The group felt it was essential that the window manager should make no assumptions about the locus of control, and should be capable of supporting reactive (user-in-control), active (application-in-control), and mixed initiative modes of operation.
The division of responsibilities between the window manager and application program should be such as to minimize their mutual awareness. With presently available hardware, the rearrangement of the screen may require that data already output by an application program to its window be repeated either wholly or in part. The application program may have had no part in causing the screen rearrangement and therefore has no obvious incentive to redraw anything. If it is to remain unaware of what the window manager has just done, then redrawing ought to be undertaken by the window manager. There are some special cases, however:
As hardware costs diminish and display processor designs evolve, the need to tolerate such special-case compromises will reduce, and the ideal of requiring no redraw by the application program in response to screen rearrangement will be achieved.
The input model calls for separately identified input channels associated with each window. These are most easily mapped into the operating system's input-output structure as named inter process communication channels or ports. If the window manager is to be portable or extensible, a similar method will also be required for communication of output and control messages from the application programs.
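One way to realize such per-panel channels on a Unix system is sketched below, using a socketpair as the inter-process communication primitive. This is an assumed construction, not one the Working Group specified; a named pipe or port would serve equally.

```c
#include <stddef.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical sketch: one inter-process communication channel per
 * panel.  The window manager writes event records on its end of the
 * socketpair; the client process reads them on the other. */
typedef struct {
    int panel_id;
    int wm_fd;                    /* window manager's end of the channel */
    int client_fd;                /* client (application) end */
} InputChannel;

static int channel_open(InputChannel *c, int panel_id)
{
    int fds[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, fds) == -1)
        return -1;
    c->panel_id = panel_id;
    c->wm_fd = fds[0];
    c->client_fd = fds[1];
    return 0;
}

static int channel_send(const InputChannel *c, const void *buf, size_t len)
{
    return write(c->wm_fd, buf, len) == (ssize_t)len ? 0 : -1;
}
```

Using the same channel mechanism in the other direction, for output and control messages from the application, is what gives the architecture its portability.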
The division of responsibility for the graphical processing required to generate the contents of a panel is difficult. The range of options is widened if the operating system enables application programs to share procedural information, since then at least the memory and swapping overheads associated with locating the majority of graphical processing in the application process are minimized. (The current Sun Windows system requires 200-300Kbytes of library code to be loaded into each application process because sharing of libraries is impossible in BSD 4.2.)
There is assumed to be at least one pointing device, with at least one button on it. Other interactive devices may be needed, e.g. potentiometers, multiple pointing devices, 3D input. All interactive input should be routed by the window manager to applications (via the communication ports associated with panels). The integration of such devices into Unix operating systems normally requires the installation of new driver code. It should be possible to control the driver in all of the ways implied by the input model, and preferably, to load dynamically that part of the driver that transforms mouse and keyboard input to event sequences.
Portability of the Window Manager is much enhanced if the hardware places the frame buffer directly in the address space of the main memory, since the window manager can then be a user-level process.