
### 7.1 INTRODUCTION

#### 7.1.1 Relationship to 2D

It is now time to look at the 3D side of PHIGS. The emphasis so far has been on the 2D side partly because some of the features (for example, structures) can equally well be explained in 2D and 3D (with 2D being simpler) but also because PHIGS is an effective standard or system for structured 2D graphics and for applications where the control of output to the workstation is greater than is possible in, say, GKS.

PHIGS is a 3D system. While structure elements can be defined in the 2D form and will be stored as such, traversal in PHIGS only generates 3D primitives. The output primitives described so far are really shorthand notations for the equivalent 3D primitive. Thus, traversing the structure element:

```
      POLYLINE(5, XA, YA)
```

creates the same output primitive as:

```
      POLYLINE 3(5, XA, YA, ZA)
```

if ZA(1) to ZA(5) are all equal to 0. In this Chapter, the more complex viewing model for 3D graphics is described. Hopefully, the introduction to viewing via the 2D description in Chapter 6 will make it easier to understand.

#### 7.1.2 Coordinate systems

For 2D graphics, most systems use a coordinate system where the positive X-direction is to the right, and the positive Y-direction is upwards. This convention is generally accepted apart from one or two exceptions mainly related to hardware systems where the image is generated from the top downwards and it is consequently more convenient to have the positive Y-direction being downwards.

In 3 dimensions, the situation is less clear. If positive X increases to the right and positive Y increases upwards, does positive Z increase as you come out of the paper or go into it? Figure 7.1 illustrates the situation where positive Z increases as you come out of the paper. This is called a right-handed coordinate system. Holding up the right hand with the thumb as the X-direction, the first finger as the Y-direction, then the second finger defines the positive Z-direction and this has to point towards you. The opposite is the case for the left-handed coordinate system.

##### Figure 7.1: Right-handed coordinate system

All coordinate systems used by PHIGS are right-handed coordinate systems. This is different from some current practice where a mixture of left and right-handed coordinate systems are used. Consequently, care should be taken when looking at some standard texts. A drawback with right-handed coordinate systems is that the Z=0 plane of the NPC unit cube is the back face. Thus the 2D functions all generate output on the back rather than front face which may be less intuitive in some cases.

Coordinates used by PHIGS in 3 dimensions are normally expressed in a homogeneous form as (X,Y,Z,1). Consequently, transformations in 3D are 4 × 4 homogeneous matrices. For example, on traversal, the coordinates of a point in modelling coordinates (MCX,MCY,MCZ) are transformed by the current modelling transformation C to give a point in world coordinates (WX,WY,WZ) as follows:

```
      (WCX, WCY, WCZ, WCW) = C (MCX, MCY, MCZ, 1)
```

where WX=WCX/WCW, WY=WCY/WCW and WZ=WCZ/WCW.
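The division by the homogeneous W-component can be sketched as follows (a hypothetical Python helper, not part of the PHIGS binding):

```python
# Hypothetical helper: apply a 4x4 homogeneous matrix C to a modelling
# coordinate point and divide through by the W-component.
def transform_mc_to_wc(c, mcx, mcy, mcz):
    p = (mcx, mcy, mcz, 1.0)
    wcx, wcy, wcz, wcw = (sum(c[i][j] * p[j] for j in range(4))
                          for i in range(4))
    return wcx / wcw, wcy / wcw, wcz / wcw

# With C equal to the identity matrix, the point is unchanged.
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
print(transform_mc_to_wc(identity, 2.0, 3.0, 4.0))  # (2.0, 3.0, 4.0)
```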

#### 7.1.3 3D functions

Most of the 3D functions are a simple extension of the equivalent 2D functions. The rule is to add the number 3 to the name of the function, add an additional parameter for the Z-component where a position has been defined by X and Y parameters, extend arrays of 4 elements representing limits of an area to 6 elements representing the limits of a volume, and replace 3 × 3 matrices by 4 × 4. Some examples of the pairs of functions are shown below:

```
POLYLINE  (N, XA, YA)
POLYLINE 3(N, XA, YA, ZA)

SET LOCAL TRANSFORMATION  (MT2, TYPE)
SET LOCAL TRANSFORMATION 3(MT3, TYPE)

SET GLOBAL TRANSFORMATION  (MT2)
SET GLOBAL TRANSFORMATION 3(MT3)

SET MODELLING CLIPPING VOLUME  (OP, N, HLFSPA)
SET MODELLING CLIPPING VOLUME 3(OP, N, HLFSPA)
```

MT2 is a 3 × 3 matrix while MT3 is a 4 × 4 matrix. MT2 is a shorthand for MT3. The relationship is:

```
            ( A B C )         ( A B 0 C )
      MT2 = ( D E F )   MT3 = ( D E 0 F )
            ( G H I )         ( 0 0 1 0 )
                              ( G H 0 I )
```
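The embedding can be sketched in Python (a hypothetical helper, not a PHIGS function): the 2D homogeneous matrix is expanded into the 4 × 4 form so that the Z-coordinate passes through unchanged.

```python
# Hypothetical helper: embed a 2D homogeneous 3x3 matrix MT2 into the
# equivalent 4x4 matrix MT3. Z is left unchanged by the new row/column.
def mt2_to_mt3(mt2):
    (a, b, c), (d, e, f), (g, h, i) = mt2
    return [[a, b, 0, c],
            [d, e, 0, f],
            [0, 0, 1, 0],
            [g, h, 0, i]]

# A 2D translation by (3, 4) becomes a 3D translation by (3, 4, 0).
mt2 = [[1, 0, 3], [0, 1, 4], [0, 0, 1]]
print(mt2_to_mt3(mt2))
```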

In 3D, the modelling clipping volume requires a parameter HLFSPA(6,N) which, for the Ith half-space, has values:

```
      HLFSPA(1,I)=X
      HLFSPA(2,I)=Y
      HLFSPA(3,I)=Z
      HLFSPA(4,I)=DX
      HLFSPA(5,I)=DY
      HLFSPA(6,I)=DZ
```

The point on the boundary is (X,Y,Z) and the normal is from this point to (X+DX, Y+DY, Z+DZ).
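A single half-space test can be sketched as follows. This is a hypothetical Python helper, not the PHIGS binding, and it assumes the convention that geometry on the side the normal points towards is retained:

```python
# Hypothetical helper: test whether a point lies inside the half-space
# defined by a boundary point (x, y, z) and a normal towards
# (x+dx, y+dy, z+dz). ASSUMPTION: the side the normal points towards
# is the side that is retained.
def inside_half_space(px, py, pz, x, y, z, dx, dy, dz):
    # Dot product of (point - boundary point) with the normal.
    return (px - x) * dx + (py - y) * dy + (pz - z) * dz >= 0.0

# The plane Z = 0 with normal towards +Z keeps points with Z >= 0.
print(inside_half_space(1, 2, 5, 0, 0, 0, 0, 0, 1))   # True
print(inside_half_space(1, 2, -5, 0, 0, 0, 0, 0, 1))  # False
```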

#### 7.1.4 Building transformation matrices

The 2D version of BUILD TRANSFORMATION MATRIX (see Section 2.8) is quite simple, defining a fixed point and the scale, rotate and shift to apply. In 3D, the situation is more complex as the rotation can be about three axes:

```
BUILD TRANSFORMATION MATRIX 3 (XF, YF, ZF, DX, DY, DZ, PHX, PRY, PRZ,
SX, SY, SZ, ER, MT3)
```

The function builds the 4 × 4 homogeneous matrix, MT3, to be returned to the application. The parameter ER is set to 0 if a matrix has been built successfully, or to a non-zero value otherwise. The transformation built is a mixture of scaling, rotation and shifting. The parameters (XF, YF, ZF) define a fixed point to be used as an origin for scaling and rotation. The parameters (DX, DY, DZ) define the translation to be applied. PHX defines the anti-clockwise rotation in radians to be applied about the X-axis through the fixed point. Similarly, PRY and PRZ define rotations about the Y and Z-axes. The parameters (SX, SY, SZ) scale the coordinates about the fixed point. If several of the component transformations are non-identity, the operations are performed in the order scale, rotate and shift. The rotations are performed in the order rotate X, rotate Y and rotate Z. The utility functions defined in Section 5.6 in 2D have 3D equivalents:

```
TRANSLATE 3(DX, DY, DZ, ER, MT3)
SCALE 3(SX, SY, SZ, ER, MT3)
ROTATE X(TH, ER, MT3)
ROTATE Y(TH, ER, MT3)
ROTATE Z(TH, ER, MT3)
COMPOSE MATRIX 3(MTA3, MTB3, ER, MT3)
COMPOSE TRANSFORMATION MATRIX 3(MTA3, XF, YF, ZF, DX, DY, DZ,
PHX, PRY, PRZ, SX, SY, SZ, ER, MT3)
```

The meanings of these functions are, in general, obvious assuming the 2D function is known. The three ROTATE functions specify an anti-clockwise rotation about the specified axis.
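The order of operations in BUILD TRANSFORMATION MATRIX 3 can be sketched with hypothetical Python equivalents of these utilities (the error parameter is omitted, and standard right-handed, anti-clockwise rotation matrices are assumed):

```python
import math

def matmul(a, b):
    # 4x4 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate3(dx, dy, dz):
    return [[1, 0, 0, dx], [0, 1, 0, dy], [0, 0, 1, dz], [0, 0, 0, 1]]

def scale3(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

def rotate_x(th):
    c, s = math.cos(th), math.sin(th)
    return [[1, 0, 0, 0], [0, c, -s, 0], [0, s, c, 0], [0, 0, 0, 1]]

def rotate_y(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, 0, s, 0], [0, 1, 0, 0], [-s, 0, c, 0], [0, 0, 0, 1]]

def rotate_z(th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def build_transformation_matrix_3(xf, yf, zf, dx, dy, dz,
                                  phx, pry, prz, sx, sy, sz):
    mt3 = translate3(-xf, -yf, -zf)            # move fixed point to origin
    mt3 = matmul(scale3(sx, sy, sz), mt3)      # scale about the fixed point
    mt3 = matmul(rotate_x(phx), mt3)           # rotate about X ...
    mt3 = matmul(rotate_y(pry), mt3)           # ... then about Y ...
    mt3 = matmul(rotate_z(prz), mt3)           # ... then about Z
    mt3 = matmul(translate3(xf, yf, zf), mt3)  # restore the fixed point
    return matmul(translate3(dx, dy, dz), mt3) # finally, apply the shift
```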

### 7.2 VIEWING

#### 7.2.1 Viewing pipeline

The viewing pipeline is shown in Figure 7.2. On structure traversal, output primitives are defined with their coordinates defined in world coordinates. This is the WC Scene. As for 2D, an intermediate coordinate system is defined called View Reference Coordinates (VRC). This coordinate system is a reorientation of the World Coordinate system to one more appropriate for viewing. View mapping is then applied to produce a picture in Normalized Projection Coordinates (NPC). It is the view mapping operation that is quite different from the 2D case. The scene is effectively projected onto a 2D plane. The device independent picture in NPC coordinates has to be placed on the display of the workstation at the appropriate position possibly with local workstation clipping. That final stage is also shown in Figure 7.2.

##### Figure 7.2: Coordinate transformations in PHIGS

In this Chapter, the description will concentrate on the view orientation and mapping. View orientation and mapping are the same for both PHIGS and GKS-3D. The model is sufficiently comprehensive to meet most requirements.

#### 7.2.2 View example

To illustrate viewing in 3D, the desk used as the standard 2D example will be replaced by the four letters L, E, F and T.

##### Figure 7.3: Position of LEFT relative to WC axes

The world coordinate space in which they sit is centred around the point (50,50,50). The letters are 20 units high, 16 units across and 6 units deep. They are spaced 20 units apart. Consequently, they extend from 10 units to 90 units in the X-direction, 40 to 60 units in the Y-direction and 47 to 53 units in the Z-direction. The data values defining the front faces are:

```
      DATA XL /10, 10, 14, 14, 26, 26, 10 /
DATA YL /40, 60, 60, 44, 44, 40, 40/
DATA ZL /53, 53, 53, 53, 53, 53, 53 /

DATA XE /30, 30, 46, 46, 34, 34, 42, 42, 34, 34, 46, 46, 30/
DATA YE /40, 60, 60, 56, 56, 52, 52, 48, 48, 44, 44, 40, 40/
DATA ZE /53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53/

DATA XF /50, 50, 66, 66, 54, 54, 62, 62, 54, 54, 50/
DATA YF /40, 60, 60, 56, 56, 52, 52, 48, 48, 40, 40/
DATA ZF /53, 53, 53, 53, 53, 53, 53, 53, 53, 53, 53/

DATA XT /76, 76, 70, 70, 86, 86, 80, 80, 76 /
DATA YT /40, 56, 56, 60, 60, 56, 56, 40, 40/
DATA ZT /53, 53, 53, 53, 53, 53, 53, 53, 53/
```

A view of the world coordinate scene with the faces of the letters defined as fill areas is shown in Figure 7.3. The top face is made solid to improve the comprehension. At some angles of projection, the wire frame image can be ambiguous without this.

#### 7.2.3 Viewing model

As the 3D scene has to be represented on a 2D display, the mapping from 3D to 2D has to be chosen to either give a general overall impression of the scene or to ensure that certain aspects of the scene are retained in the 2D picture, maybe at the loss of information in other areas.

To produce the 2D picture from the 3D scene, each point of an object in the scene must be mapped onto a plane. The way this mapping is accomplished differentiates the type of projection chosen. The projections used in PHIGS are the standard planar geometric projections. Such a projection of an object is generated by passing lines called projectors through each point of the object and finding where these projectors intersect with a plane called the view plane.

##### Figure 7.4: PHIGS viewing model

The two main types of projection are perspective and parallel. A perspective projection has all the projectors starting from a single point called the Projection Reference Point. A parallel projection has all the projectors parallel, effectively they all start from a point at infinity.

The PHIGS viewing model is shown in Figure 7.4. The basic operations are the same as the 2D operations described in Section 6.2. First, the view is orientated by choosing an appropriate coordinate system from which to do the view mapping. As for 2D, this means defining a new origin called the View Reference Point and a new orientation for the axes to establish the view reference coordinates. In 3D, the three axes are labelled U, V and N for historical reasons rather than X, Y and Z. As in 2D, the scene described in view reference coordinates is mapped onto normalized projection coordinates. Whereas in 2D, this is primarily a window to viewport mapping that transforms the VRC scene to a picture in NPC space, in 3D a projective mapping takes place that effectively maps the 3D object into a 2D picture. (This is not completely accurate as the Z-values are retained to allow depth cueing and hidden line or hidden surface elimination to be performed if required. The object is mapped onto the plane but the Z-value is retained.)

The mapping can either be a parallel or a perspective mapping. In either case, a view plane is defined in an X-Y plane of the VRC coordinate system and the scene is projected onto that plane. As for 2D viewing, the definition of the coordinates of the View Window establishes the X,Y coordinates of the mapping to NPC space. Front and back planes are defined parallel to the view plane which define the Z-coordinates of NPC space. They are also used to limit the part of the scene that can be viewed. Only the unit cube of NPC space can be displayed on a workstation.

Similar to 2D viewing, setting the view representation can also define clipping limits which restrict the part of the NPC space that can appear at the workstation. The X-Y clipping limits are independent of the view window that establishes the NPC coordinates. In the Z-direction, the front and back planes both establish the coordinates in the Z-direction and act as clipping planes.

A projection reference point is defined which orientates the projectors defining the view volume. For perspective projections, all projectors pass through the projection reference point. For parallel projections, all projectors are parallel to the vector joining the projection reference point to the centre of the view window.
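The two projector types can be sketched as follows. These are hypothetical Python helpers, not the PHIGS binding; the view plane is taken to be the plane N = VPD in view reference coordinates:

```python
# Hypothetical helpers: project a VRC point (u, v, n) onto the view plane
# N = vpd. Both assume the projector is not parallel to the view plane.

def project_parallel(p, direction, vpd):
    # All projectors share one direction; move along it until N = vpd.
    u, v, n = p
    du, dv, dn = direction
    t = (vpd - n) / dn
    return (u + t * du, v + t * dv, vpd)

def project_perspective(p, prp, vpd):
    # The projector runs from the projection reference point through p.
    u, v, n = p
    pu, pv, pn = prp
    t = (vpd - pn) / (n - pn)
    return (pu + t * (u - pu), pv + t * (v - pv), vpd)
```

For example, with the projection reference point at (0, 0, 10) and the view plane at N = -10, the point (5, 0, 0) projects perspectively to (10, 0, -10): points nearer the projection reference point appear larger.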

### 7.3 VIEW ORIENTATION

The utility function that defines the view orientation matrix is:

```
EVALUATE VIEW ORIENTATION MATRIX 3 (VRX, VRY, VRZ, VNX, VNY, VNZ,
VUX, VUY, VUZ, ER, VMM3)
```

(VRX, VRY, VRZ) defines the origin of the view reference coordinate system, called the view reference point. It is usual to define the view reference point as having some connection with the object or scene to be viewed: for example, the centre of the scene or a point on the surface of the main object in the scene. The view plane is defined on the N-axis (remember coordinates in the view reference coordinate system are defined as UVN rather than XYZ). Placing the view reference point in the scene to be viewed gives a much better chance of seeing the object to be viewed! It is all too easy in 3D to define a projection which does not project the object to be viewed onto the view plane! The N or Z-axis of the VRC coordinate system is defined by (VNX,VNY,VNZ) which defines a vector from the view reference point. Similarly, (VUX,VUY,VUZ) is a vector from the view reference point which defines the UP direction, the V or Y-axis. With the V and N-axes defined and, knowing that it is a right-handed coordinate system, this effectively defines the U or X-axis also. If the view orientation is well defined, the function returns a 4 × 4 matrix, VMM3, and the parameter ER is set to 0. A non-zero value returned in ER indicates that the matrix could not be generated. This would occur, for example, if the V and N-axes specified were defined as being the same.
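One way to derive such a matrix is via cross products. This sketch is an assumed construction, not the standard's prescribed algorithm: the N-axis comes from the view plane normal, U from UP × N, and V completes the right-handed basis; the view reference point is shifted to the VRC origin.

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    l = math.sqrt(sum(x * x for x in a))
    if l == 0.0:
        raise ValueError("degenerate vector")  # ER would be set non-zero
    return tuple(x / l for x in a)

def view_orientation(vrp, vpn, vup):
    n = normalize(vpn)
    u = normalize(cross(vup, n))  # fails if VUP is parallel to VPN
    v = cross(n, u)               # completes the right-handed basis
    # Rows are the new axes; the last column moves the view reference
    # point to the origin of VRC.
    tx, ty, tz = vrp
    return [[u[0], u[1], u[2], -(u[0] * tx + u[1] * ty + u[2] * tz)],
            [v[0], v[1], v[2], -(v[0] * tx + v[1] * ty + v[2] * tz)],
            [n[0], n[1], n[2], -(n[0] * tx + n[1] * ty + n[2] * tz)],
            [0, 0, 0, 1]]
```

With the view reference point at (50, 50, 50), the normal (0, 0, 1) and UP (0, 1, 0), this maps the point (50, 50, 50) to the VRC origin, matching the example below.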

##### Figure 7.5: VRC Coordinate system displaced from WC system

In the example in Section 7.2.2, the origin of the world coordinate system is at a distance from the object LEFT. The origin could be set at the centre of the scene by:

```
      EVALUATE VIEW ORIENTATION MATRIX 3 (50, 50, 50, 0, 0, 1,
0, 1, 0, ER, VMM3)
```

This is shown in Figure 7.5. The new origin is at the left-hand side of the letter F and half way up it.

In this case, the N-axis is parallel to the Z-axis of the world coordinate system. To view the LEFT scene from above or rotated would require the UVN axis system to be rotated. This is relatively easy to do in 2D as only one vector has to be defined to give the Y-axis. In 3D, two orthogonal axes need to be defined.

To aid the definition of such vectors, PHIGS provides the following utility function:

```
TRANSFORM POINT 3(X, Y, Z, MAT3, ER, XT, YT, ZT)
```

Given a point (X,Y,Z) and a transformation matrix MAT3, the function returns the point (XT,YT,ZT) which is the point (X,Y,Z) transformed by MAT3. As usual, ER is set non-zero if the transformation could not be performed and zero if it was.

##### Figure 7.6: Rotated VRC coordinates relative to WC

Using BUILD TRANSFORMATION MATRIX 3 to define the transformation, known orthogonal vectors can be rotated to the desired position using TRANSFORM POINT 3. For example:

```
      BUILD TRANSFORMATION MATRIX 3 (0, 0, 0, 0, 0, 0,
PX, PY, PZ, 1, 1, 1, ER, MAT3)
TRANSFORM POINT 3(0, 0, 1, MAT3, ER, VNX, VNY, VNZ)
TRANSFORM POINT 3(0, 1, 0, MAT3, ER, VUX, VUY, VUZ)
EVALUATE VIEW ORIENTATION MATRIX 3 (50, 50, 50, VNX, VNY, VNZ,
VUX, VUY, VUZ, ER, VMM3)
```

The transformation MAT3 rotates the vectors (0, 0, 1) and (0, 1, 0) by PX, PY and PZ radians anti-clockwise about the world coordinate X, Y and Z-axes. If PX is set to 0.4*π, and PY and PZ are set to 0, the axes would be positioned as shown in Figure 7.6.

### 7.4 VIEW MAPPING

The utility function that defines the view mapping matrix is:

```
EVALUATE VIEW MAPPING MATRIX 3 (WL, PVL, TYPE, PRPU, PRPV, PRPN,
VPD, BPD, FPD, ER, VMM3)
```

The position (PRPU,PRPV,PRPN) defines the projection reference point (see Figure 7.4) for the view mapping. VPD defines the position of the view plane by giving its distance along the N-axis from the view reference point. Most frequently, the value of VPD is negative so that the view plane is behind the scene to be viewed. The view plane is parallel to the U,V-axes of the view reference coordinate system. Note that it is not necessary for the projection reference point to be on the N-axis.

BPD and FPD define the back and front plane positions by giving their distances from the origin. If all the scene is to be viewed, these should be placed sufficiently behind and in front of the scene so that they do not remove parts of the scene. Their effect will be described later.

WL(1) to WL(4) define the limits of the part of the view plane (in the order UMIN, UMAX, VMIN, VMAX) that is to be made available to the workstation for display. It is important to remember that for parallel projections, the projectors are parallel to the line from the projection reference point to the centre of the view window. So, for example, if the projection reference point is on the N-axis and the projectors are required perpendicular to the view plane, the centre of the view window must be (0,0,VPD).

The mapping from view reference coordinates to normalized projection coordinates is defined by specifying the projection viewport limits PVL(1) to PVL(4) in NPC coordinates that correspond to the window limits, effectively the window to viewport mapping. A difference from the 2D case is that the NPC coordinates are still three dimensional. The N-coordinates of points in the scene are retained and used to define the equivalent Z-value in NPC coordinates. The NPC Z-value of the back plane is defined as PVL(5) and the Z-value of the front plane is defined as PVL(6). The Z-value of the point (U,V,N) is (N-BPD)/(FPD-BPD), scaled into the range PVL(5) to PVL(6). The U,V-values are defined by the projection of (U,V,N) onto the view plane and then the transformation of the U,V-coordinates to the NPC X,Y-coordinates using the window to viewport mapping. The Z-value is retained as it may be of use in any hidden line or hidden surface calculations. Finally, the parameter TYPE is set to PARALLEL or PERSPECTIVE to define the type of projection used in the view mapping.
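The window-to-viewport part of the mapping can be sketched as follows. This hypothetical helper covers only the case where projection has already left U and V unchanged (a parallel projection with projectors perpendicular to the view plane); WL, PVL, BPD and FPD have the meanings above:

```python
# Hypothetical helper: map a VRC point (u, v, n) into NPC. The U,V window
# limits wl = (umin, umax, vmin, vmax) map to the NPC viewport limits
# pvl[0..3], and N between bpd and fpd maps to Z between pvl[4] and pvl[5].
def vrc_to_npc(u, v, n, wl, pvl, bpd, fpd):
    umin, umax, vmin, vmax = wl
    x = pvl[0] + (u - umin) / (umax - umin) * (pvl[1] - pvl[0])
    y = pvl[2] + (v - vmin) / (vmax - vmin) * (pvl[3] - pvl[2])
    z = pvl[4] + (n - bpd) / (fpd - bpd) * (pvl[5] - pvl[4])
    return x, y, z
```

With the window and viewport of the example below, the centre of the scene (0, 0, 0) maps to the centre of the NPC unit cube, (0.5, 0.5, 0.5).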

An example of view mapping using the view reference coordinates defined before is:

```
      EVALUATE VIEW ORIENTATION MATRIX 3 (50, 50, 50, 0, 0, 1,
0, 1, 0, ER, VOM3)
WL(1)=-60
WL(2)=60
WL(3)=-60
WL(4)=60
PVL(1)=0
PVL(2)=1
PVL(3)=0
PVL(4)=1
PVL(5)=0
PVL(6)=1
EVALUATE VIEW MAPPING MATRIX 3 (WL, PVL, PARALLEL, 0, 0,
1000, -300, -900, 300, ER, VMM3)
```

##### Figure 7.7: Head-on view of LEFT

The change of origin in the view orientation means that the scene LEFT extends from -40 to +40 in the U-direction, -10 to +10 in the V-direction and -3 to +3 in the N-direction. The parallel projection, with the projection reference point on the N-axis and the window limits defining the centre of the window on the N-axis and extending further than the extent of LEFT, ensures that all the scene will be projected onto the window and will be mapped into the NPC unit square. This front-on view will produce the picture in Figure 7.7.

### 7.5 DEFINING A VIEW

The function that defines a view of a scene on a workstation is:

```
SET VIEW REPRESENTATION 3 (WS, VI, VOM3, VMM3, VCL, XYC, BC, FC)
```

The view with view index VI on workstation WS is defined by the view orientation matrix VOM3 and the view mapping matrix VMM3. As for the 2D case, view clipping limits for the X, Y and Z-directions of NPC space can be defined by VCL(1) to VCL(6). Whether clipping is applied in the X,Y-directions depends on whether XYC is set to CLIP or NOCLIP. Whether clipping is applied at the back against the lowest Z-value, VCL(5), depends on whether BC is set to CLIP or NOCLIP; similarly, clipping at the front against the largest Z-value, VCL(6), depends on whether FC is set to CLIP or NOCLIP. For example:

```
      VCL(1)=0.3
VCL(2)=0.7
VCL(3)=0.3
VCL(4)=0.7
VCL(5)=0
VCL(6)=1
SET VIEW REPRESENTATION 3 (WS, 1, VOM3, VMM3, VCL, CLIP, CLIP, CLIP)
```

if used with the matrices defined in the example in Section 7.4 will clip the projected picture of LEFT so that only the centre part is presented to the workstation for display (see Figure 7.8).

##### Figure 7.8: Clipping set to CLIP

The part of the T still visible has its right edge drawn, creating a square. This is because the individual faces are drawn as fill areas, and the outline of a HOLLOW fill area is drawn including the edges created by clipping.
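The per-point clipping decision controlled by VCL, XYC, BC and FC can be sketched as follows (a hypothetical Python helper, not the PHIGS binding):

```python
# Hypothetical helper: decide whether an NPC point survives view clipping.
# vcl gives (xmin, xmax, ymin, ymax, zmin, zmax); xyc, bc and fc are True
# for CLIP and False for NOCLIP.
def npc_point_visible(x, y, z, vcl, xyc, bc, fc):
    if xyc and not (vcl[0] <= x <= vcl[1] and vcl[2] <= y <= vcl[3]):
        return False
    if bc and z < vcl[4]:   # clip against the back plane
        return False
    if fc and z > vcl[5]:   # clip against the front plane
        return False
    return True

# With the limits of the example, only the centre of the unit square survives.
vcl = (0.3, 0.7, 0.3, 0.7, 0, 1)
print(npc_point_visible(0.5, 0.5, 0.5, vcl, True, True, True))  # True
print(npc_point_visible(0.1, 0.5, 0.5, vcl, True, True, True))  # False
```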