PRINCIPLES FOR PRODUCING COMPUTER ANIMATED MOTION PICTURES

Don Deily

PhD Dissertation in Electrical Engineering

University of Pennsylvania

1968


Preface

This dissertation describes the process of producing a full-length documentary film, with sound and in color, by means of a digital computer. Support for the project came from the National Science Foundation, and it has been directed by Professors John W. Carr, III, and Morris Rubinoff. Although a great deal of development has yet to be done to reduce the delay and cost of production, we feel we have developed a practical method of mass-producing computer-animated films.

There are a great many people who were involved in the development of the Movies system, and in the production of the Fields and Waves movie that was the end product. I wish to thank all of them for their enthusiasm and helpfulness. I would like to thank Professor John W. Carr, III, and the people at Calvin-De Frenes Studios, and in addition Tom Purdom, Bernie Everlof, S. V. Sankaran, Joel Katzen, Odile de la Beaujardiere, Larry Lieberman, Frank Manola, and Dan Callahan, all of whom worked very hard for the success of the film.

I wish to acknowledge above all the direction and help of Professor Morris Rubinoff. He had the breadth of vision to conceive of the Fields and Waves movie in the first place, and the willingness to see it through, despite many difficulties. He has been an exacting mentor, but always delightful and stimulating to work with.

I would like to thank Helen Millinghausen for the days of patient typing to produce this dissertation.

This work is affectionately dedicated to Betsy ... and to the memory of Rufus.

Glassboro, New Jersey October, 1968

1. Introduction

The study of computer animated film production techniques has been supported at the Moore School of Electrical Engineering since July 1, 1967, by the National Science Foundation. The funds were made available on the basis of a proposal originally submitted to the National Science Foundation by Professors John W. Carr, III, and Morris Rubinoff of the Moore School on June 29, 1966. The proposal, entitled PROPOSAL FOR SUPPORT IN THE AREA OF A MAN-COMPUTER PROCESS FOR ANIMATION OF FILMS, outlined the techniques then available and described the results anticipated from two years of investigation and development. Five areas were described explicitly:

  1. Development of one- and two-dimensional languages for image manipulation.
  2. Development of flexible programming techniques for describing problem solution.
  3. Application of information retrieval techniques to aid in the synthesis of images.
  4. Application of time-shared on-line man-machine systems to the process of problem solution.
  5. Development of techniques for computer solutions to differential equations and other mathematical models of the physical sciences and engineering (numerical analysis).

This dissertation describes the process of film generation that has been developed to date using extensions of the available FORTRAN language routines, with the IBM 360/65 as the computer system. This covers roughly the topics listed above under 1, 2, and 3.

It should be noted that the work described here is by no means the only course of development being followed by researchers on this project. Work is also progressing in areas of explicitly graphically-oriented languages, and on a more direct man-machine on-line system. The work described here was directed primarily toward the development of techniques for specifying and producing a full length computer-animated color movie with a sound track. The test of these techniques was the subsequent production of the documentary movie "Electromagnetic Fields and Waves, Part I: Transmission Lines". Another basic objective was to discover and formulate general principles basic to computer animation and underlying the new techniques.

1.1 Background

The development of computer graphics devices enables a person who can describe an object mathematically to display it in any aspect with motion pictures. This can be done using a description in an algebraic computer language and requires no training in animation drawing on the part of the animator.

At the present time, computer animated documentary films are being produced at numerous locations, both in industry and in the universities. In the field of educational films, which is the type considered in this dissertation, a great deal of work has been done by the Education Development Center (EDC), Newton, Massachusetts, where Dr. James Strickland is Studio Physicist (Reference 1.1). EDC has produced films such as "Eulerian and Lagrangian Description" and "Wave Velocities, Dispersion, and the α - β Diagram". Bell Telephone Laboratories has produced, for example, "Force, Mass, and Motion" by Dr. Frank Sinden, the "Tumbling Satellite" film by Dr. E. E. Zajac, and "The L6 Language" by Dr. Kenneth Knowlton. Work is also going on, among other places, at the Goddard Space Flight Center, MIT, and the Polytechnic Institute of Brooklyn, as well as by Dr. Huggins at Johns Hopkins, by Dr. Don Weiner at Syracuse, and by Dr. Michaels, presently at Haverford College. Titles of some of the films produced by these institutions are given in the bibliography. Although a considerable amount of computer-generated film is also being produced by industry, it appears that little or none of it is oriented toward educational topics. A summary of the state of the art can be found in the 1968 Year End Report of the UAIDE Computer Animation Committee by Thomas F. Penderghast, Chairman (Reference 1.1).

At the beginning of this project, none of the literature published on the subject of computer animation provided a formalization of the techniques for specifying the flow of visual and sound sequences in a film. The idea of the storyboard is well known (Reference 1.2), but the efficient production of computer-animated films places more stringent requirements on the detail of information that must be supplied to the technicians who generate the film. The techniques developed for an explicit, well-defined specification that are described in this dissertation constitute a language we call the Scenario Description Language. In an article by K. C. Knowlton and W. H. Huggins concerning computer animation, given in the UAIDE report (Reference 1.3), the need for languages and standards is noted. The Scenario Description Language given in Chapter 2 is a major step toward such standardization.

The objective of this dissertation is to describe the principles and techniques which were derived and developed during the production of an instructional film on electromagnetic waves. The primary aim was to utilize all the equipment and procedures available, whether to the digital programmer or in the film laboratory, to produce a professional-quality film economically. The results of our work indicate that, at least for currently available equipment, the combined technique described here is the most versatile and economical.

The topics that will be discussed may be summarized as follows:

  1. Techniques of movie script writing and visual action specification. This includes story-board specification of topics, cued script, synchronized color and sound action flowsheets, and object definition.
  2. Techniques of generating color film from black and white originals. Detailed description of the SC 4020 hardware and the way it generates film. The registration problem, techniques of occlusion, masking, shading, use of intensity settings. Other possible techniques considered, and hardware improvements which might be expected.
  3. Animation techniques in general, and computer animation techniques in particular. Included here are the universal conceptual camera with its ability to PAN, TILT, and ZOOM in two and three dimensions and to move about in three dimensions; the use of cycling, freeze frames, and fairings; and the general attack and methods used to represent vectors, fields, and waves, with the problems encountered in both the static and dynamic situations.
  4. Techniques of programming, and the general system design for the MOVIES computer animation system, including the basic programming philosophy (the use of FORTRAN):
    1. The MOVIES subroutines: two-dimensional routines and three-dimensional routines for initialization, PAN, TILT, ZOOM, and the image plane.
    2. The library of image generating routines available, such as METER, WIRE, DRAW2L, DRAW3L, LINEL, LINEP, etc.
    3. The checkout system and philosophy: printer I/O, Calcomp routines, and checkout on disk for maximum economy.
    4. The SAVE system: in core storage, on the disk, and the stored library of images and image generating routines.
    5. The high-speed I/O system, using a five-buffer parallel tape writing program.
  5. Techniques of editing film
  6. A detailed description of the costs involved in generating a computer animated film, and possible ways they can be reduced.

2. Techniques of Movie-Script Writing and Visual Action Specification

There are numerous aspects of movie script writing which have to do with style and verbal writing skills. Although these are of great importance, they have been dealt with by specialists in the field of script writing, in particular in The Technique of Documentary Film Production by W. Hugh Baddeley, and this is a topic we need not develop here. In the writing of Electromagnetic Fields and Waves we used the services of a professional writer, Tom Purdom, to write the script, and most of the narrative is the result of collaboration between Mr. Purdom and myself. The general method of approaching the problem of collaboration, however, is of importance, since it illustrates the first of the techniques developed for this project, the sequence story board. Figure 2.1 shows the form used in the writing of the Fields and Waves movie.

The general procedure for specifying the movie begins on a set of these forms, where the originator of the story to be told fills in the top two items - first, the POINTS TO BE MADE, and second, sketches of each TYPICAL FRAME FOR THIS SCENE. This is intended to form the basis for the subsequent writing of the explicit narrative, which is written by the script writer under the supervision of the originator (whom we shall henceforth call the producer). The writing of the script proceeds with the producer and script writer working together. In the case of our movie, the general method was for me to give one, two, or many explanations of the particular point being made in the particular scene, after which Tom Purdom typed the narrative in rough draft and final draft form. The final draft of the narrative and the description of the sound effects were then transferred to the section labelled SOUND TRACK. The nature of the scene was well established at this point, and a final decision was made as to the TRANSITION FROM PREVIOUS SCENE. The narrative description established the DURATION OF THIS SCENE. The method of transition from scene to scene was a matter of style or emphasis in some cases, while in others it was dictated by the particular sequence of images required by the flow of action. In general, where there is a change in the set of objects displayed on the screen, the writer has the following choices:

Figure 2.1: The Form Used in the Sequence Story Board. A Typical Frame is Sketched in the Screen Outline
  1. Cut
  2. Fade out of previous scene, fade in of present scene
  3. Cross-dissolve
  4. Pan
  5. Tilt
  6. Wipe
  7. Zoom in or out (move closer or further)

(Definitions of these terms are included in a glossary at the end of this work.)

There is no best type of transition. Each has a place in the vocabulary of the scenarist, and each in turn can have a different effect by being given a different length; i.e., 16 frames, 24 frames, 32 frames, 48 frames, etc. As in the writing of the verbal script, the most interesting results are obtained by changing the style and pace of the transitions during the course of the film.

The overall flow of the narrative and visual action is obtained from the completed Sequence Story Board forms when they are physically connected together by a set of clear acetate envelopes which have been taped together. Figure 2.2 shows such a set of forms. Dr. Rubinoff has dubbed this versatile language form The Accordion. The accordion provides the nucleus from which the subsequent tasks of producing the film are derived. It is the means by which the various contributors to the production can measure their progress.

The production proceeds as follows: The accordion is given to the director, who is then responsible for bringing together all the various parts of the film, and hence is also responsible for breaking down the process into subtasks which are to be performed by individual persons. To do this he must

1. Have the final narrative script transferred from the accordion to simple typewritten form on 8½" × 11" paper, with numbered lines, and no stage directions or extraneous material. This is the script which will be read by the narrator for recording onto magnetic soundtrack tape. This first step is necessary for the accurate specification of all the visual sequences, since they must be synchronized (or synced) to the narrator's voice. There is the possible alternate approach of generating the visual sequences first, and then recording the narrator's voice as he watches the final edited film being shown. This is exactly what is done, in fact, in the case of conventional documentary films which are made up of live action scenes shot on location. However, animation is normally done in the order we have used, because it is much simpler to generate the visual sequences synced exactly to a soundtrack already in existence, and this approach is less likely to waste film and effort.

Figure 2.2: The Accordion: Showing a Sequence Depicted by Typical Frames and Accompanying Narrative

2. Have the script recorded onto 16 mm soundtrack tape, and exact frame counts calculated between the cues on the script that are required for properly timing the visual action. This step is critical and should be explained in detail. The process of recording the sound track requires numerous takes and re-takes in order to get the proper emphasis and timing. This should be done in one continuous recording session (at the movie studio), with the director, writer, and producer all in attendance. The session is monitored by a recording technician who has a copy of the script. The narrator himself is in a soundproof, glass-enclosed room, with the group listening through a speaker but with their comments directed only to the recording technician. This protects the recording from being spoiled by extraneous noises. The technician has control of the tape recorder and can also communicate with the narrator through an intercom, so that he can transmit the comments of the listeners. As each section of the script is successfully recorded, the technician notes on his copy of the script which take was considered most satisfactory, and the next section is begun. After the script has been successfully recorded, the technician is given a second copy of the script. This one has been divided into a number of scenes, each of more or less short duration (two to three minutes or less). Each scene has had marked on it those words which the writer has designated as cues for some sort of action to begin or terminate. (See Figure 2.3). The technician then transfers the final recorded script to 16 mm soundtrack tape. This tape has perforations in exactly the same places as 16 mm film, and the transfer is made at the same rate of 24 frames per second at which the film will be projected.

Figure 2.3: Sample of the Script Before the Cues Have Been Measured. Note that the lines of each page have been numbered in order to allow easy reference during retakes.

He then mounts the soundtrack on a playback machine which measures from the beginning exactly how many frames have passed to any point on the track. By listening to the soundtrack, and stopping it at each desired cue, he can then determine exactly how many frames separate each cue in the script. He notes these on the marked copy of the script and returns it to the director. This is the cued script and it forms the basis for the timing of the images to be generated by computer. (See Figure 2.4).

Figure 2.4: The Cued Script. Note that the cues are each measured in feet (') and frames (x) from the beginning of the scene. The first cue is always taken as beginning at 2 feet, 1 frame (2' 1x)
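
The arithmetic behind the cued script is simple but worth making explicit. A 16 mm sound print runs at 24 frames per second and carries 40 frames per foot, so a cue at F feet, X frames lies 40F + X frames from the head of the scene. The short FORTRAN sketch below converts a list of cue positions into absolute frame counts and into the number of frames separating successive cues; the cue list itself is hypothetical and is not taken from the Fields and Waves script.

C     CONVERT CUE POSITIONS (FEET, FRAMES) INTO ABSOLUTE FRAME
C     COUNTS AND FRAME GAPS BETWEEN SUCCESSIVE CUES.
C     ASSUMES 16 MM FILM, 40 FRAMES PER FOOT, 24 FRAMES/SECOND.
C     THE CUE LIST BELOW IS HYPOTHETICAL.
      INTEGER FEET(5), FRAMS(5), NABS(5)
      DATA FEET /2, 4, 7, 11, 15/
      DATA FRAMS/1, 17, 30, 5, 22/
      NCUE = 5
      DO 10 I = 1, NCUE
      NABS(I) = 40*FEET(I) + FRAMS(I)
      NGAP = 0
      IF (I .GT. 1) NGAP = NABS(I) - NABS(I-1)
      SEC = FLOAT(NABS(I)) / 24.0
      WRITE (6,100) I, FEET(I), FRAMS(I), NABS(I), SEC, NGAP
   10 CONTINUE
  100 FORMAT (1X, 'CUE', I3, I5, ' FT', I3, ' FR =', I6,
     1        ' FRAMES', F8.2, ' SEC   GAP', I6)
      STOP
      END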

3. Specify the exact sequence of frames to be produced by the computer, one filmstrip being generated for each color to be printed. The exact process for printing the color images will be treated later, but it should be understood that each image must be broken down into the component colors that make it up, and each such component must be generated separately. The final color film will be created by printing onto the same piece of color film each of the component images (which have so far been printed on black and white film) through a filter of the appropriate color. This specification is done by the director on the SYNCHRONIZED COLOR AND SOUND ACTION FLOWSHEET. Figure 2.5 shows a sample of such a flowsheet.

Figure 2.5: Example of the Synchronized Color and Sound Action Flowsheet (OD refers to a numbered Object Definition; TC refers to a Timing Chart)

In order to specify the images exactly, he supplements the flowsheet with numerous OBJECT DEFINITIONS. (See Figure 2.6). These are set up on a form which has a square film image plane. Here the director is able to sketch the exact image he requires, with notations giving exact dimensions and giving equations for the images which are to be specified by an analytic formula. Here we see the power of an algebraic language such as FORTRAN, which allows the definition of analytic functions of any number of variables, including time. The director is therefore able to define each object as a function of time. The same object definition then suffices as the definer of any number of frames. These frames are generated with time incremented by 1/24 second per frame, and the resulting sequence naturally shows the defined object changing in accordance with the prescribed motion given by the definition.

Figure 2.6: Example of an Object Definition, Giving Exact Specification of the Red Image to be Generated for Scene 3
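
To make the idea of an Object Definition given as a function of time concrete, the sketch below computes, for every frame of a two-second sequence, the tip of a unit vector rotating at half a revolution per second, with time advanced by 1/24 second per frame. The definition is hypothetical, not one of the actual Fields and Waves Object Definitions, and the endpoints are merely printed; in the production system they would be handed to the MOVIES line-drawing routines.

C     A HYPOTHETICAL OBJECT DEFINITION GIVEN AS A FUNCTION OF
C     TIME T - A UNIT VECTOR ANCHORED AT THE ORIGIN, ROTATING
C     AT 0.5 REVOLUTION PER SECOND.  ONE FRAME EVERY 1/24 SEC.
C     IN THE REAL SYSTEM THE ENDPOINTS WOULD BE PASSED TO THE
C     MOVIES DRAWING ROUTINES RATHER THAN PRINTED.
      TWOPI = 6.2831853
      NFRAME = 48
      DO 10 K = 1, NFRAME
      T = FLOAT(K-1) / 24.0
C     ANGLE OF THE VECTOR AT TIME T, IN RADIANS
      THETA = TWOPI * 0.5 * T
      X2 = COS(THETA)
      Y2 = SIN(THETA)
      WRITE (6,100) K, T, X2, Y2
   10 CONTINUE
  100 FORMAT (1X, 'FRAME', I4, '  T =', F7.3,
     1        '  TIP AT (', F7.3, ',', F7.3, ')')
      STOP
      END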

It is true, of course, that there are numerous sequences required by the scenarist which cannot be completely described in any of the forms listed above. Thus, another technique for prescribing the sequence of images to appear is the FRAME SEQUENCE TIMING CHART. (See Figure 2.7). This chart is necessary particularly when the sequence to be shown does not lend itself easily to an analytic description. An example of this was the sequence of frames used to denote the sinusoidal variation of intensity of the electric vector field between two wires. This sequence consisted of four sets of thirteen frames each. The first thirteen began with a blank frame and in each succeeding frame a number of additional positive field lines (downward pointing arrows) were added until the thirteenth frame was reached. This had the maximum number of positive field lines, and then the number was again reduced over the next thirteen frames and the cycle repeated, up and down, in the next 26 frames with negative arrows. The entire sequence was thus 52 frames long. Except for a verbal description, the best way of describing this sequence for the benefit of the programmer was to define each image in some manner (in this case, by listing the lines it included from a complete list of all lines), and then to give a timing chart of the order in which they should appear.

Figure 2.7: Timing Chart for Electric Field Intensity Variation Caused by Voltage Difference Between Two Wires

Another application of the timing chart language was the specification of how a fixed number of frames should be used to generate both backward and forward moving sinusoidal waves. In this case (see Figure 2.8), the original moving wave sequence consisted of 18 frames. When these frames were drawn in increasing order, image 1 to 18, the resulting picture was of a sinusoidal wave moving from left to right. There were a number of scenes in which a similar wave was required to move from right to left (denoting a reflected wave) at the same time. In this case the same set of 18 images was used, but with the images being drawn in reverse order: starting with image 18 and counting down to image 1. This seems simple enough, but it was also necessary to maintain the proper phase relationship between the two images, since the reflected wave had to be exactly 180° out of phase with the incident wave at the reflecting barrier in one scene, and exactly in phase at the reflecting barrier in another scene. The most illuminating way of prescribing the relationship between the two sets of images, both for the director and the programmer, was the use of timing charts. In the case of the out-of-phase wave, the reflected wave started its 9th image when the incident wave started its first. In the case of the in-phase reflected wave, the reflected wave was on image number 1 at the same time as the incident wave, but then went to image 18, then 17, etc. (See Figure 2.9).

Figure 2.8: Timing Chart for Sinusoidal Incident and Reflected Waves with ρ=+1
Figure 2.9: Timing Chart for Sinusoidal Incident and Reflected Waves with ρ=-1
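
A timing chart of this kind translates directly into index arithmetic. The sketch below is one plausible realization, not the production code: for each output frame it prints which of the 18 stored wave images is drawn for the incident wave (taken in increasing order) and which for the reflected wave (taken in decreasing order), with an offset NSHIFT between the two cycles. NSHIFT = 0 starts the reflected wave on image 1 together with the incident wave, while NSHIFT = 8 starts it on its 9th image, corresponding to the two phase relationships described above.

C     INDEX ARITHMETIC FOR A CYCLE OF 18 STORED WAVE IMAGES.
C     THE INCIDENT WAVE USES THE IMAGES IN INCREASING ORDER,
C     THE REFLECTED WAVE IN DECREASING ORDER.  NSHIFT OFFSETS
C     THE REFLECTED CYCLE - NSHIFT = 0 STARTS IT ON IMAGE 1,
C     NSHIFT = 8 STARTS IT ON IMAGE 9.  ONE PLAUSIBLE
C     REALIZATION OF THE TIMING CHARTS, NOT THE PRODUCTION CODE.
      NIMAGE = 18
      NSHIFT = 8
      NFRAME = 36
      DO 10 K = 1, NFRAME
      IINC = 1 + MOD(K-1, NIMAGE)
      IREF = 1 + MOD(NIMAGE - MOD(K-1,NIMAGE) + NSHIFT, NIMAGE)
      WRITE (6,100) K, IINC, IREF
   10 CONTINUE
  100 FORMAT (1X, 'FRAME', I4, '   INCIDENT IMAGE', I4,
     1        '   REFLECTED IMAGE', I4)
      STOP
      END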

Some comments are in order comparing the Scenario Description Language (SDL) and existing storyboard techniques. The most detailed account of hand animation techniques found by this writer was in The Technique of Film Animation by Halas and Manvell. The original idea for the Color and Sound Action Flowsheets came from this book. Three of the basic features of SDL and of hand animation have a loose parallelism: the Accordion and the storyboard, the Color and Sound Action Flowsheet and the workbook, and Object Definitions and character sketches. In the case of hand animation, the workbook is a form that gives the narrative flow on a timing basis, similar to the cued script. However, the precise specification of the visual sequence is not given until the so-called key animator (the senior, directing artist) actually draws the primary frames for each sequence. In the case of hand animation, the characters involved in the story are given form by the key animator, who makes drawings of each character, giving the proportions among its various parts and its relative size compared to the other characters. The production artists constantly refer to these sketches as they create the visual sequences.

Computer animation allows the efficient combination of the rather diffuse specification techniques of hand animation. This is so because the creator of the computer animated sequences can invoke the images required, including their motion, by specifying the proper combination of Object Definitions, Timing Charts, and camera motions in exact numerical terms.

The Accordion itself has certain advantages over the usual storyboard, which is usually either in book form or on a large board. The Accordion can be cross-referenced by simply folding out the two parts to be compared so they are both visible at once. (An advantage enjoyed by a large storyboard also, but not by a book.) The acetate envelopes can easily be separated, and new sequences and more detailed breakdowns taped in between. (An advantage of the book form, but not of a large board.) The standardized format combines all specifications required for the technical and stylistic reviews of the film which should be made at this stage, before the narrative is recorded and programming production begun. The acetate envelopes allow any panel of the Accordion to be replaced easily.

The Color and Sound-Action Flowsheets combine the complete specification of both the sound and visual sequences. As was stated above, this cannot be achieved in the case of hand animators' storyboards because the visual specification requires explicit drawings by a skilled artist. The computer animator, however, can do this, and is in fact required to do it, as the scenario is specified.

The Object Definition and Timing Chart, although a logical extension of the character sketch, give full definitional capability to the creator. This capability is clearly not available to the hand animator, since every one of his sketches is only a single example. In contrast, the computer animator is allowed not only the complete and precise spatial specification of each object or character, but also its exact temporal behavior.

It should be noted finally that the applicability of the Scenario Description Language is by no means limited to digital computer animation. It can be applied to production using any device capable of handling mathematical definitions (e.g., analog or hybrid computers), and even to hand animation in situations where such definition is useful.

3. Techniques of Generating Color Film from Black and White Originals

The basic tool for producing computer animated films for this project was the Stromberg-Carlson SC 4020 Printer-Plotter. The SC 4020 is a system consisting of a seven-track magnetic tape-reading device, an input storage buffer (the one at the Polytechnic Institute of Brooklyn (PIB) holds 4096 six-bit characters), instruction-interpreting logic, and a camera and cathode ray tube assembly housed together in a light-tight enclosure. Figure 3.1 shows a schematic diagram of the 4020.

Figure 3.1: Schematic Diagram of the Stromberg-Carlson SC 4020 Printer-Plotter Showing Major Components and Flow of Instructions

The cathode ray tube (CRT) and the camera are under the control of instructions read from the tape and stored in the buffer. The instructions are 36 bits long and have various formats depending on the particular operation to be performed. Reference 3.1 is the user's manual of the SC 4020 and contains a detailed description of the instructions. The general capabilities of the CRT are drawing lines (or vectors - hence, vector generation) and a selected set of characters of fixed size on the face of the tube. When the machine is in operation, the camera records the images formed on the tube face. The shutter stays open continuously, so that the user of the machine (hereafter referred to as the programmer) can input a string of instructions of any length, and an image of any complexity can be built up on each frame of film. Different instructions execute at different rates, but the most important one, the vector generating instruction, executes at the rate of about 1000 instructions per second. Figure 3.2 shows the vector generating instruction format. It can be seen that the programmer can start the vector at any address on the tubeface in both the X-direction and the Y-direction from 0 to 1023 (i.e., 2^10 positions), and extend the vector in both the X- and Y-direction for up to 63 positions, or rasters as they are called. This vector generating instruction is the primitive component from which all of our images were generated. There is a character set of a fixed number of different characters which are formed by extruding the beam through a metallic grid. (See Figure 3.3). These characters are of little use to us, since they are fixed in size (and very small) and they have a fixed orientation and format. The character set is intended primarily for the recording of alphabetic data on microfilm - the printer part of the printer-plotter. We shall assume henceforth that all lines drawn, including those used to generate characters, are drawn exclusively by the vector generating instruction.

Figure 3.2: Format of the Vector Generating Commands of the SC 4020
Figure 3.3: Charactron Shaped Beam Tube
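
Since a single vector command can extend at most 63 rasters in X and in Y, any longer line must be drawn as a chain of such commands. The sketch below shows one straightforward way of making the split; the subdivision scheme is our own illustration and is not taken from the SC 4020 manual. In the real system the resulting starting points and deltas would be packed into 36-bit vector commands and written onto the tape; here they are simply printed.

C     SPLIT A LINE FROM (IX1,IY1) TO (IX2,IY2), GIVEN IN RASTER
C     COORDINATES 0-1023, INTO SEGMENTS OF AT MOST 63 RASTERS
C     IN EACH DIRECTION, THE LIMIT OF A SINGLE VECTOR COMMAND.
C     THE SUBDIVISION SCHEME IS ILLUSTRATIVE ONLY.
      IX1 = 100
      IY1 = 200
      IX2 = 900
      IY2 = 350
      IDX = IX2 - IX1
      IDY = IY2 - IY1
C     NUMBER OF SEGMENTS NEEDED (CEILING OF LARGER SPAN / 63)
      NSEG = (MAX0(IABS(IDX), IABS(IDY)) + 62) / 63
      IF (NSEG .LT. 1) NSEG = 1
      DO 10 K = 1, NSEG
      JX1 = IX1 + (IDX*(K-1)) / NSEG
      JY1 = IY1 + (IDY*(K-1)) / NSEG
      JX2 = IX1 + (IDX*K) / NSEG
      JY2 = IY1 + (IDY*K) / NSEG
      KDX = JX2 - JX1
      KDY = JY2 - JY1
      WRITE (6,100) K, JX1, JY1, KDX, KDY
   10 CONTINUE
  100 FORMAT (1X, 'VECTOR', I3, '  START (', I5, ',', I5,
     1        ')   DELTAS', 2I5)
      STOP
      END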

The camera itself is fixed permanently with respect to the CRT. The only control of the camera directly available to the programmer is the instruction to advance the film to the next frame. He cannot rewind the film, nor move the physical camera in any way. Theoretically, the programmer can select any of three devices to print his image: a 35 mm camera, a 16 mm camera, or a roll of photographic paper 9 inches wide. In practice, however, PIB would only place one of their cameras in the printer for a given run. This allowed the camera chosen to be bolted into the machine, and its registration with respect to the CRT fixed with maximum precision. This precise registration is of critical importance to the generation of high quality color movies, as will be seen shortly.

3.1 The Process of Generating a Filmstrip by Digital Computer

I shall assume from the start that the ultimate objective of the user of this volume is to produce a 16 mm motion picture, either sound or silent. Here we show the principal steps in using a digital computer and the SC 4020. First it should be noted that silent 16 mm movies are shown at the rate of 16 frames per second, and sound 16 mm movies are shown at the rate of 24 frames per second. This immediately fixes the rate of time flow from frame to frame. In addition, it should be noted that animators sometimes keep each drawing of an animated sequence on the screen for two frames, at least in the case of sound film. In the case of conventional animation this is accomplished by having the animation photographer simply take two frames with his camera each time a new drawing is placed on the animation camera stand. This is called photographing on twos. (See Reference 3.2 concerning the Techniques of Animation.) This is done provided the motion seen by the viewer remains sufficiently smooth, and one can see that it results in a 50% reduction in the number of frames which must be drawn by the animator. Since drawing accounts for at least 25% to 30% of the cost of the finished product, considerable savings can be achieved by this method. The general process of generating a strip of film is to generate on a digital computer a sequence of SC 4020 instructions consisting of vector drawing and frame advance instructions. These instructions are then written onto seven-track magnetic tape. The tape is mounted on the SC 4020 tape reader, which proceeds to read the instructions and print the required lines on film. The film is developed immediately at the same installation and the resulting movie is returned to the programmer.
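
The saving from shooting on twos is easy to quantify. The sketch below works out, for a few arbitrary scene lengths, how many frames appear on the screen at 24 frames per second and how many distinct images must actually be computed and drawn when each image is held for two frames.

C     FRAMES AND DISTINCT DRAWINGS FOR SOUND FILM AT 24 FRAMES
C     PER SECOND, SHOT ON TWOS (EACH DRAWING HELD TWO FRAMES).
C     THE SCENE LENGTHS ARE ARBITRARY EXAMPLES.
      DIMENSION SECS(3)
      DATA SECS /30.0, 120.0, 1800.0/
      DO 10 I = 1, 3
      NFRAME = IFIX(SECS(I) * 24.0)
C     ON TWOS, TWO FRAMES SHARE ONE DRAWING (ROUNDED UP)
      NDRAW = (NFRAME + 1) / 2
      WRITE (6,100) SECS(I), NFRAME, NDRAW
   10 CONTINUE
  100 FORMAT (1X, F8.1, ' SEC', I8, ' FRAMES', I8,
     1        ' DRAWINGS ON TWOS')
      STOP
      END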

The programmer has other aspects of the hardware indirectly under his control, and these become important when he must generate more complicated effects than simple line drawings. First of all, of course, he must indicate to the operator of the SC 4020 which type of film (16 mm or 35 mm) he wishes the images to be printed on. Second, he can choose the intensity of the CRT beam at the beginning of each tape. The intensity of the beam is set by the operator with a dial on the SC 4020 control panel. Once the run has started the programmer cannot change it, since it is within the light-tight box. Third, he can designate the aperture setting of the camera. This is usually set to F 5.6.

It should be noted also that the phosphors on the CRT are monochromatic, and the film used in both the 16 mm and the 35 mm cameras is black and white film sensitive to the particular region of the spectrum of the CRT phosphors. The film is developed in the usual black and white reversal process used for movie film, so that the resulting images appear as white lines on a black background.

Therefore, with the present configuration of the SC 4020 at PIB, it is impossible to generate color images directly. (There is an installation where this is possible, located at Sandia Associates in Albuquerque, New Mexico, which will be discussed later, but its use is restricted and, apparently, expensive.) The line drawn by the CRT on the film naturally has a finite width. This width depends on the CRT intensity setting and the camera aperture. The SC 4020 is usually operated with the CRT set at 3.4 and the camera at F 5.6. The intensity units are those appearing on the dial on the CRT control panel; we have not determined their exact meaning, and it does not seem relevant here. At this setting, the line width should be approximately four rasters on 16 mm film. Figure 3.4 shows the approximate shape of a single line drawn on film. In the case of 35 mm film, the line width is somewhat less than three rasters.

One of the special effects we have needed for the production of our movie was the shading in of areas so that they appeared all white on the film. This requires that the 4020 draw a series of lines through the area to be shaded, one very close to the next, so that the entire area is exposed to light. If the intensity is set properly, the lines will just overlap, and the area will be shaded in completely. Warning: this technique of shading is fraught with pitfalls. Our experience has shown that if the intensity of the CRT beam is too great (the normal intensity of 3.4 seems a bit high), the light from the beam will fall outside the shaded area. This produces an effect called halation. The shaded area then appears to have an ill-defined edge around it, in which there is a grainy, out-of-focus haze. Halation can be avoided by reducing the intensity of the beam to 3.2 on the CRT. Our experience to date has been that at this setting the shaded area is well defined; the film is uniformly exposed, and the emulsion is entirely clear on 35 mm film, which is the most desirable state for subsequent printing onto color film. However, if the intensity is reduced to 3.0, for example, or the aperture reduced much below F 5.6, the line width is reduced to such a level that the individual lines can be distinguished, and lines drawn outside the vicinity of other lines disappear almost entirely. One must navigate here between Scylla and Charybdis. It is suggested that a short experimental film be generated testing the particular shading sequence at different intensities before a final copy of the film is printed.

Figure 3.4: Approximate Shape of a Line Drawn on Film by the SC 4020
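
A shading pass is nothing more than a family of closely spaced parallel vectors. The sketch below fills a rectangle with horizontal lines a fixed number of rasters apart; the spacing is a parameter that must be chosen by experiment against the line width obtained at the chosen intensity and aperture, as discussed above. In the real system each line would become a vector command (or a chain of them, for spans longer than 63 rasters); here the endpoints are merely printed.

C     SHADE A RECTANGULAR AREA BY DRAWING CLOSELY SPACED
C     HORIZONTAL LINES.  NSPACE (IN RASTERS) MUST BE CHOSEN BY
C     EXPERIMENT SO THAT ADJACENT LINES JUST OVERLAP AT THE
C     CHOSEN INTENSITY AND APERTURE.  RASTER COORDINATES 0-1023.
      IXL = 300
      IXR = 600
      IYB = 400
      IYT = 500
      NSPACE = 2
      DO 10 IY = IYB, IYT, NSPACE
      WRITE (6,100) IXL, IY, IXR, IY
   10 CONTINUE
  100 FORMAT (1X, 'LINE FROM (', I5, ',', I5, ') TO (',
     1        I5, ',', I5, ')')
      STOP
      END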

3.2 The Technique of Generating Color Film

We can now describe the process of generating color film from black and white prints in somewhat more detail. To begin with, let us define our terms. We shall refer to the black and white (or B/W) movies generated by the SC 4020 as the ORIGINALS. They are the original filmstrips produced. The color film which results from printing several of them together onto one strip of color film we shall call the COLOR MASTER. Finally, any film which has not yet been developed, and hence is still sensitive to light, we shall call RAW STOCK.

The process then goes as follows: Each B/W filmstrip is generated with a large X printed on the first frame. This is followed by the frames containing exactly the lines to be printed for that color, each frame following in a sequence measured from the synchronizing X frame. Frame counts here are critical, since the motions of objects which are made up of several colors can be printed over exactly the right frame only if they have all started from the same benchmark. Figure 3.5 shows the format of a typical set of B/W filmstrips to be printed into composite color. It is highly recommended that all the B/W filmstrips intended to be merged onto the same piece of color master be generated on the SC 4020 at the same time, one following the other. This will reduce the variation in the locations of images due to changing cameras, drift in the CRT electronics, ordinary change in the equipment due to wear and tear, etc. In fact, the programmer should make sure that all the B/W originals for one color master are printed on the same reel of raw B/W stock, in order not to risk the variations caused by changing the film cans and the characteristics of the raw stock itself. Needless to say, it is essential that a particular B/W original be printed on one continuous strip of filmstock. This is necessary for the subsequent printing to generate the color master.

Figure 3.5: Typical Set of Black and White Images Printed Through Filters to Make a Composite Color 16 mm Film Image

Once the B/W originals have been obtained, they are submitted to the professional film studio for the composite print. This is done on an aerial-image animation stand. Figure 3.6 shows a simplified diagram of an aerial-image camera. In our case, we make use of it only as a multiple-pass printer. It is possible to place transparent acetate cels onto the table in the middle of the apparatus, but this is a feature we have not found economical, nor particularly necessary to use. The aerial-image camera, however, gives us one overwhelming advantage. The user can load a reel of raw film in the camera and photograph an exact number of frames of an original filmstrip which has been loaded into the motion picture projector at the bottom. He can then rewind the raw film back to its exact starting point, reload a different original filmstrip in the projector, and photograph the images on the second filmstrip for exactly the same number of frames. All parts of the aerial-image system have been machined to maximum accuracy, so that the images from the different original filmstrips are projected in exactly the correct position with respect to each other on the raw stock. In addition, since the original filmstrips have all been loaded starting from their synchronizing X frames, the proper images are printed on the proper frames. Finally, the projector which projects the original filmstrips can have color filters mounted on its lens. Hence, we can project each image through a filter corresponding to its intended color, and the image on the raw stock is the desired color composite. This is the basic principle of our method of color motion picture generation.

Figure 3.6: The Oxberry Aerial-Image Animation System

3.3 The Problem of Registration

We have one more problem of major significance to cover before leaving the topic of color film image generation. Registration of an image on a frame of film is defined as the positioning of the coordinates of that image with respect to the coordinates of the film frame. If the registration is not accurate, the image will appear to jump and weave across the screen when the filmstrip is projected. All movie cameras and projectors have some means of positioning the film more or less accurately. A detailed description of the various mechanisms is given in Reference 3.2 (Special Effects). What we are interested in here is a general comparison of the methods used for 16 mm and 35 mm film. In the case of 16 mm film, there are only two perforations per frame of film. (See Figure 3.7). The film is positioned in front of the aperture by means of two claws which pull each succeeding frame down. The film is held in position during exposure by means of a pressure plate while the claw returns upward to engage the next frame. On the other hand, 35 mm film has four perforations per frame (see Figure 3.7).

Figure 3.7: Relative Sizes of 16 mm and 35 mm Film Images and Position of Sprocket Holes

In addition, the film is threaded through rollers which turn continuously: each succeeding frame is brought into position by the rotation of these rollers, which are driven by accurately machined gears. The film is then held in position by two precisely set pins called pilot pins. The contrast is obvious. The registration of images from frame to frame is at least an order of magnitude better in the case of 35 mm film than it is in the case of 16 mm. In addition, even if exactly the same dimensional tolerances were possible with 16 mm as with 35 mm, which they are not, the same error would appear about twice as great on the 16 mm film, since the image must be magnified twice as much to appear just as large. The problem of registration is particularly acute in our case, since we are printing several images onto each frame of master film. Our experience has shown that the 16 mm originals from the PIB SC 4020 have poor registration accuracy. When two filmstrips were printed onto a 16 mm master color filmstrip, the two images jumped and weaved with respect to each other and the frame aperture, giving the appearance of one color swimming with respect to the other. The motion was on the order of several percent of the screen width, and completely unacceptable.

We then tried generating 35 mm originals at PIB. This costs 5 cents per frame more than 16 mm, or about $10 extra per minute. Naturally, the most desirable procedure would then be to print the 35 mm originals onto 35 mm color film. This turns out to be both difficult and expensive. First of all, the logistics in our case would have been prohibitive. No photographic concern in Philadelphia is equipped for 35 mm work, and we would therefore have to go to New York. (Dealing with one place in New York, namely PIB, is already considerable logistic trouble!)

Secondly, although 35 mm black and white film is not too much more expensive than 16 mm, 35 mm color film is several times more expensive. (See the section on costs.) Finally, our fundamental objective is to produce a 16 mm documentary for distribution to schools and universities. We would therefore have to introduce a third step of reprinting the 35 mm master onto a 16 mm master, incurring still more cost and some loss of accuracy. Instead, we had the film studio print the 35 mm originals onto a 16 mm color master on the aerial-image camera. This is no more difficult than doing so with 16 mm originals, since the originals are simply projected into the camera.

The results were an order of magnitude better than those obtained with 16 mm originals. Although there was still a small amount of relative motion, the images were considered acceptable and we have printed our first film by this method. This leads to the conclusion that the 16 mm camera used in the SC 4020 has considerably lower registration accuracy than the camera used in the aerial-image system. This may be because it has an inherently less accurate design, or because it has not been as well maintained, or is worn. Whatever the reason, we have essentially no control over the equipment at PIB and we must, perforce, use their 35 mm camera.

3.4 The Technique of Occlusion

The technique of printing composite images as described above lends itself to an economical solution of the occlusion problem in some cases. The principle depends on the characteristics of the color reversal film used in the generation of the color master print. (See Reference 3.3, Color As Seen and Photographed.) The technique depends on the fact that white light causes all the dye layers of the film to be washed away in development, leaving only clear film (hence, white). A white image automatically occludes all parts of images printed in other colors in the same image area. An application of this can be found in the scene in which we show the cross-section of an electric field between two wires. The wires themselves were denoted by two white circles completely shaded in. The electric field lines were denoted by red lines that passed from one wire to the other along the lines of the force field. The field lines were required to terminate at the edges of the wires. However, it would have required considerable extra analysis and programming to compute the exact point of intersection for each line. Moreover, perfect color registration would have been required in the aerial-image camera. Instead, the lines were simply drawn to the centers of the wires. (See Figure 3.8).

Figure 3.8: Example of the Solution of the Occlusion Problem by Color Overprinting

When the two original images were printed together onto the color master, the white of the shaded wires automatically occluded the unwanted parts of the field lines and provided perfect registration.

3.5 Other Possible Methods of Generating Color Film

Naturally, the techniques described here hardly exhaust all possible methods of generating color film. In particular, there are two other methods available for the generation of computer-animated color film: Sandia Associates' use of a color filter wheel in an SC 4020, and a color-phosphor CRT developed by RCA and the Air Force Cambridge Research Laboratories.

The color wheel system used by Sandia is a custom-built installation. (Reference 3.4, Sandia's papers.) The color wheel itself is mounted between the CRT and the camera. Sandia has also installed a CRT with a special white phosphor. (The ordinary one is only blue.) The camera, of course, is loaded with color film. The programmer has control over the color wheel by means of several instructions which were originally intended to control some auxiliary hardware that does not exist in their particular installation. The wheel can be reset to a neutral (clear) filter position by one instruction. The programmer can then advance the wheel consecutively to red, blue, or yellow filters, in that order, by means of a second instruction. This, in combination with the reset instruction, gives the programmer well-defined control over the color filters. A third instruction is executed after every color wheel instruction to provide a delay before the execution of any other instructions, so that all transients in the hardware have died out. Although we have not tried the machine ourselves, we are informed by Sandia that the density of the filters requires that lines of any color be overstruck eight to ten times to get a visible line on the film. From our point of view, since we want to generate thousands of frames of film and computer time is one of our main costs, this would raise the cost of generating the magnetic tapes considerably. In addition, the machine is owned by the AEC, and the financial and logistic problems involved in using their hardware (in New Mexico) appeared too great. We decided not to try this approach to making color film.

The color CRT developed by Air Force Cambridge Research Labs was made known to us through a film they distributed to the Moore School in the Fall of 1967. It demonstrated movies made using the scope and gave some general description of the instructions available for programming it. The machine is evidently an experimental prototype. Since, even if it were possible to use it, which seems doubtful, we would have had to develop software for it from scratch, it was decided very early not to pursue this possibility. Mention of its existence is included only for completeness.

4. Techniques of Digital Computer Animation

This section is intended to give an exhaustive description of the techniques that were discovered or developed for representing physical events on film by means of a digital computer. Some of the methods are new, but a great many more are ideas borrowed from conventional animators. There is considerable overlap between the techniques discussed here and in the next section covering the techniques of programming. The general idea in this section is to introduce the concepts and show why things were done as they were done. The next section is the "how" of this section's "why". The ideas described here should be firmly grasped before the user begins writing film scripts.

4.1 The Conceptual or Virtual Camera

One of the most fundamental concepts of our system of image generation is the notion of the virtual camera. This is, as implied, a camera which exists only in the mind of the person programming the computer. In the computer itself it consists of a set of limits which constitute the field of view of the picture. In addition, in the case of scenes taken of three-dimensional objects, it consists of the set of coordinate transformations required to draw all objects in proper perspective on the film. We shall begin with a description of the two-dimensional camera definitions, since many of them are used in turn by the three-dimensional camera.

4.2 The Two-Dimensional Virtual Camera

Figure 4.1 shows the basic scaling of the film plane defined by the virtual camera. Since most of the concepts to be displayed are defined analytically, or at least mathematically, the screen is best thought of as an area of graph paper with the origin at the center. The standard limits of +1.0 to -1.0 for each axis are somewhat arbitrary. However, my experience has been that a normalized limit is often convenient in programming, since one tends to use ratios to define limits, and +1.0 is the naturally defined normal ratio. In any case, it is easy for the user to change the limits, as will be seen shortly. The limits of the virtual camera are defined as a square because this is the shape of the SC 4020 tubeface itself.

Figure 4.1: Basic Scaling of the Camera Image Plane

The primary utility of the virtual camera is that it is defined independently of the objects that the programmer draws with the computer. Thus the programmer can define a picket fence as a list of lines to be drawn by some subroutine. The subroutine will draw all the defining lines (by means of a subroutine to be defined later), but only those lines which would appear within the field of view of the camera will actually be drawn. (See Figure 4.2). The programmer can set the positions of the fence and the camera independently - with the possibility of defining their motions in any way. For example, they can both be functions of time, and include motions in the X-direction, the Y-direction, rotations, and, in the case of the camera, changes in scaling.

Figure 4.2: Example of Object Defined by Subroutine and Photographed by the Virtual Camera
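
The essential operation of the two-dimensional virtual camera can be written down in a few lines. In the sketch below, an illustration of the principle rather than the calling form of the actual MOVIES routines (whose calling sequences are given in the next chapter), the camera is described by its center (XC, YC) and by HALF, the half-width of its field of view; a scene point is mapped into the normalized film plane and is to be drawn only if it falls within the limits of -1 to +1.

C     MAP A SCENE POINT (XW,YW) INTO THE NORMALIZED FILM PLANE
C     OF A TWO-DIMENSIONAL VIRTUAL CAMERA CENTERED AT (XC,YC)
C     WITH HALF-WIDTH HALF, AND TEST WHETHER IT IS IN VIEW.
C     ILLUSTRATIVE ONLY - NOT THE ACTUAL MOVIES CALLING FORM.
      XC = 3.0
      YC = 1.0
      HALF = 2.0
C     A FEW SAMPLE SCENE POINTS
      DO 10 K = 1, 5
      XW = FLOAT(K)
      YW = 0.5 * FLOAT(K)
      XS = (XW - XC) / HALF
      YS = (YW - YC) / HALF
      IN = 0
      IF (ABS(XS) .LE. 1.0 .AND. ABS(YS) .LE. 1.0) IN = 1
      WRITE (6,100) XW, YW, XS, YS, IN
   10 CONTINUE
  100 FORMAT (1X, 'SCENE (', F6.2, ',', F6.2, ')  FILM (',
     1        F6.2, ',', F6.2, ')  VISIBLE =', I2)
      STOP
      END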

Although the possibilities for varying the position of the camera are endless, they can be divided into the six possible degrees of freedom of any rigid body - that is, rotation about any of the three body axes, and translation along the three axes. In the case of the three-dimensional virtual camera, all six motions are possible. In the case of the two-dimensional camera, three motions are possible, and we define these as:

PAN (panorama motion)
motion in the plus or minus X-direction. This is actually accomplished by changing the camera center X-coordinate under the control of the programmer, who specifies a velocity of translation.

Note: In the lexicon of movie photographers, a pan is actually a rotation of the camera about its vertical axis. Horizontal translation of the camera is referred to as a truck. However, a rotation is not defined in the case of the two-dimensional camera, and since the pan is a much more common stage direction, we have adopted the terminology defined above. This is not entirely arbitrary. Our pan accomplishes about the same effect as a rotation. Finally, conventional animators abuse the terminology in the same way. When they make a horizontal sweep of a flat painted backdrop scene, they call it a pan.

TILT (tilt camera up or down)
motion in the plus or minus Y-direction. This is a change in the camera center Y-coordinate under the control of the programmer. The same comment applies as in the case of pan: Tilt is actually a rotation of the camera about its lateral axis, but we use the animator's terminology.
ZOOM (change the field of view of the camera)
is a change in the size of the area seen on the film plane. For example, as the size of the field of view increases, more area is included in the picture, and the plotted sizes of all objects become smaller and smaller.
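
In these terms a pan, a tilt, or a zoom is nothing more than a per-frame change of the camera parameters. The sketch below, again only an illustration of the principle, pans the camera to the right while zooming in over a 48-frame sequence; the rates are uniform and no fairings are applied (fairings are discussed in Section 4.9), and the parameter values are arbitrary.

C     PAN AND ZOOM OF THE TWO-DIMENSIONAL VIRTUAL CAMERA OVER
C     A 48-FRAME SEQUENCE.  UNIFORM RATES, NO FAIRINGS (SEE
C     SECTION 4.9).  ILLUSTRATIVE PARAMETER VALUES ONLY.
      XC = 0.0
      YC = 0.0
      HALF = 1.0
C     PAN RATE IN SCENE UNITS PER FRAME, ZOOM FACTOR PER FRAME
      PANVEL = 0.05
      ZRATE = 0.99
      NFRAME = 48
      DO 10 K = 1, NFRAME
C     HERE THE FRAME WOULD BE DRAWN WITH THE CURRENT CAMERA
      WRITE (6,100) K, XC, YC, HALF
      XC = XC + PANVEL
      HALF = HALF * ZRATE
   10 CONTINUE
  100 FORMAT (1X, 'FRAME', I4, '  CENTER (', F7.3, ',', F7.3,
     1        ')  HALF-WIDTH', F7.3)
      STOP
      END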

4.3 The Three-Dimensional Virtual Camera

The camera is defined to be at the center of its own right-handed coordinate system with the lens opening aimed along the -Z axis. (See Figure 4.3). This strange choice was made for a sound reason: the X-Y plane formed on its film has the same orientation as the X-Y plane in the SC 4020. We generate the perspective projection of the subject on an imaginary infinitesimal square film plane two units on a side at the origin of the camera coordinates. The camera field of view is determined by the angle called θf.o.v. (theta field of view); 2θf.o.v. is the angle between opposite sides of the pyramid formed by the picture plane and the camera coordinate origin. Clearly, the greater the angle, the more space is included in the field of view. If a rectangular (non-square) picture is required, the X and Y field of view angles can simply have different values. The standard 50 mm camera lens has a total field of view angle of about 2θf.o.v. = 55°. For convenience in computation we shall take 2θf.o.v. as 60°, hence θf.o.v. = 30°. A zoom in on an object is accomplished by reducing θf.o.v., a zoom out by increasing it.

Figure 4.3: A (Presumably Microscopic) View of the Camera Picture Plane Onto Which All Points Are Projected

4.4 Perspective Projection of a Given Point

We now consider the technique of finding the coordinates on the picture plane of the projection of an arbitrary point in the camera's field of view. We wish to project the point onto a film plane, which is a square with (0,0) at the center, (+1,+1) at the upper right, and (-1,-1) at the lower left. (See Figure 4.1). To find the X-coordinate, we look down the Y-axis at the location of the point on the X-Z plane. (See Figure 4.4). The location of the projection onto the X-coordinate of the picture plane is determined by the equation:

Figure 4.4: Geometry of the Perspective Projection of a Point in the Camera Field of View
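
The equation itself is not reproduced here, but it follows directly from the geometry of Figure 4.4: for a point with camera coordinates (XC, YC, ZC), ZC being negative since the camera looks along the -Z axis, the projections onto the normalized picture plane are XP = -XC/(ZC tan θf.o.v.) and, by the same argument applied to the Y-Z plane, YP = -YC/(ZC tan θf.o.v.), so that a point on the edge of the field of view projects exactly to ±1. The sketch below, a reconstruction from this geometry rather than a transcription of the original equation, evaluates the projection for a sample point.

C     PERSPECTIVE PROJECTION OF A POINT GIVEN IN CAMERA
C     COORDINATES (XCP,YCP,ZCP), ZCP NEGATIVE, ONTO THE
C     NORMALIZED PICTURE PLANE.  THFOV IS THE HALF-ANGLE OF
C     THE FIELD OF VIEW (30 DEGREES HERE).  THIS IS A
C     RECONSTRUCTION FROM THE GEOMETRY OF FIGURE 4.4.
      THFOV = 30.0 * 3.1415927 / 180.0
      TANT = SIN(THFOV) / COS(THFOV)
C     A SAMPLE POINT IN FRONT OF THE CAMERA
      XCP = 2.0
      YCP = 1.0
      ZCP = -10.0
      XP = -XCP / (ZCP * TANT)
      YP = -YCP / (ZCP * TANT)
      WRITE (6,100) XCP, YCP, ZCP, XP, YP
  100 FORMAT (1X, 'CAMERA COORDS', 3F8.2, '  PICTURE PLANE (',
     1        F7.3, ',', F7.3, ')')
      STOP
      END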

4.5 Definition of Coordinate Systems

In the most general case we will be dealing with three coordinate systems:

  1. fixed reference coordinates
  2. subject (or body) coordinates
  3. camera coordinates

All objects to be graphed will be given their primitive description in their own body coordinates. For example, an object named RPRIS might be a rectangular prism given as a list of its edges in its own three-space. (See Figure 4.5). Other means of describing it (such as giving its bounding planes) are possible, but in any case we would expect the definition to be given in a set of coordinates fixed in the body, so that these will be invariant as it is moved about in the reference space.

Figure 4.5: From body to camera coordinates via the fixed reference coordinates

The camera coordinates, as stated in the previous section, are fixed to the camera. In order to form the image of a solid object located in front of the camera it is first necessary to express the defining points of the body in camera coordinates. Since we want to be able to move the camera about, we must define a third set of coordinates which are always fixed in space. We can then transform from body to camera coordinates via the fixed reference coordinates. (See Figure 4.6).

Figure 4.6: Typical Spatial Relation Between the Three Coordinate Systems

In order to transform from one rectangular coordinate system to another it is necessary to specify six independent parameters (also called degrees of freedom) relating the two systems. We will use the simplest: a rotation (defined by three Euler angles) followed by a translation (defined by a 3-vector). (See Reference 4.1 for a general description of coordinate transformations.)

4.6 Transformation from Reference to Body

4.7 Transformation from Reference to Camera

We assume that three rotations are made from reference to camera in exactly the same order as those from reference to body, and we label these φc, θc, and ψc. Thus the rotation matrix from reference to camera is

4.8 General Perspective Projection of a Vector

Using the above tools we can immediately obtain the perspective projection of a line between two arbitrary points in a solid object.

It can be seen immediately that the three-dimensional virtual camera definitions allow all the possible camera motions normally defined in movie productions.
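
As an illustration of the complete chain, the sketch below takes one edge of a body, carries both endpoints from body to reference coordinates (rotation followed by translation) and from reference to camera coordinates (the inverse transformation for the camera), and then projects each endpoint as in Section 4.4. To keep the example short, the body and camera rotations are taken as single rotations about the vertical axis; in the actual system these are replaced by the full three-Euler-angle matrices, and all of the numerical values used here are arbitrary.

C     PERSPECTIVE PROJECTION OF ONE EDGE OF A BODY.  EACH
C     ENDPOINT IS CARRIED FROM BODY TO REFERENCE COORDINATES
C     (ROTATION THEN TRANSLATION), FROM REFERENCE TO CAMERA
C     COORDINATES (INVERSE CAMERA TRANSFORMATION), AND THEN
C     PROJECTED AS IN SECTION 4.4.  FOR BREVITY THE ROTATIONS
C     ARE SINGLE ROTATIONS ABOUT THE VERTICAL AXIS; THE ACTUAL
C     SYSTEM USES FULL THREE-ANGLE EULER MATRICES.
      DIMENSION XB(2), YB(2), ZB(2), XP(2), YP(2)
C     EDGE ENDPOINTS IN BODY COORDINATES
      DATA XB /0.0, 1.0/, YB /0.0, 0.0/, ZB /0.0, 0.0/
C     BODY POSITION AND ROTATION IN REFERENCE COORDINATES
      PSIB = 0.0
      TXB = 0.0
      TYB = 0.0
      TZB = -10.0
C     CAMERA POSITION AND ROTATION IN REFERENCE COORDINATES
      PSIC = 10.0 * 3.1415927 / 180.0
      TXC = 0.0
      TYC = 0.0
      TZC = 0.0
C     HALF-ANGLE OF THE FIELD OF VIEW
      THFOV = 30.0 * 3.1415927 / 180.0
      TANT = SIN(THFOV) / COS(THFOV)
      DO 10 K = 1, 2
C     BODY TO REFERENCE
      XR = XB(K)*COS(PSIB) + ZB(K)*SIN(PSIB) + TXB
      YR = YB(K) + TYB
      ZR = -XB(K)*SIN(PSIB) + ZB(K)*COS(PSIB) + TZB
C     REFERENCE TO CAMERA
      DX = XR - TXC
      DY = YR - TYC
      DZ = ZR - TZC
      XC = DX*COS(PSIC) - DZ*SIN(PSIC)
      YC = DY
      ZC = DX*SIN(PSIC) + DZ*COS(PSIC)
C     PROJECT ONTO THE NORMALIZED PICTURE PLANE (ZC NEGATIVE)
      XP(K) = -XC / (ZC * TANT)
      YP(K) = -YC / (ZC * TANT)
   10 CONTINUE
      WRITE (6,100) XP(1), YP(1), XP(2), YP(2)
  100 FORMAT (1X, 'PROJECTED EDGE FROM (', F7.3, ',', F7.3,
     1        ') TO (', F7.3, ',', F7.3, ')')
      STOP
      END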

A set of subroutines for specifying the motions of the camera and of up to ten independent body coordinate systems has been written by Joel Katzen. A detailed description of these subroutines will appear in a Master's Thesis now being written by Mr. Katzen. (See Reference 4.3).

4.9 Fairings

One of the primary objectives of the animator, particularly the animator of an educational documentary film, is to render an accurate representation of natural phenomena. This is true even of such studios as Walt Disney's, where considerable research into physical reality is made before that same reality is butchered. One of the most fundamental natural laws that must be recognized by animators is the fact that all objects have mass and must obey Newton's laws. The direct consequence of this is that when objects change speed, including going from rest to motion or vice versa, they must undergo an acceleration. If objects are moved from one place to another on the screen simply by giving them a sudden instantaneous velocity, the effect appears jerky and confusing to the viewer. Animators therefore have developed the technique of slowly increasing the rate of movement of an object as it starts to be moved, and similarly decreasing its movement slowly as it comes to rest again. This technique is referred to as adding FAIRINGS to an object's motion. In general, adding an acceleration to the change in speed of an object being drawn is not difficult when a digital computer is being used to draw the object, particularly when its position and orientation can easily be controlled by means of an algebraic computer language, as will be discussed in the next chapter. Nevertheless, the requirement of adding fairings is something which must constantly be borne in mind as the detailed script is being written, and it does give an annoying little problem to be solved in each situation that arises.

There are situations that will arise where the animator will want to make the fairings very short - to show rapid motion, or jerky motion intentionally - and situations where the fairings will be required to be slow - as in the case of a heavy object being accelerated. In general, however, for our purposes objects should accelerate and decelerate over a period of 16 frames. Figure 4.7 shows the time history of the velocity of an object starting from rest and ending at rest. Acceleration at both ends is constant over the 16-frame period, and zero otherwise. (Surprisingly, a much more elaborate scheme for adding fairings is described in The Technique of Film Animation (see Reference 4.3), where a versine function is used. However, we have found the linear acceleration technique to be quite satisfactory.) Sixteen-frame fairings have been included in the beginning and ending motions caused by the virtual camera routines PAN, TILT, and ZOOM.

Figure 4.7: Time History of the Velocity of an Object Changing Position with Fairings

Whenever the programmer or director adds fairings to the motion of an object, he must make allowance for the fact that the final values of all parameters affected will lag eight frames behind the values that would be expected if each parameter had been changing at its designated rate from the start, with no acceleration period. This problem is taken care of automatically by the camera routines, whose exact calling sequences are given in the next chapter.
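
For motions the programmer handles himself, the displacement produced by such a linear-acceleration fairing can be computed by a small function of the following sort (a sketch with assumed names and a 16-frame ramp, not a listing of the routines actually used):

      FUNCTION FAIRD(N,NTOT,V)
C     SKETCH OF A LINEAR-ACCELERATION FAIRING (ASSUMED NAMES).
C     FAIRD RETURNS THE DISPLACEMENT AFTER N FRAMES OF A MOTION THAT
C     RAMPS UP TO THE CRUISE VELOCITY V (UNITS PER FRAME) OVER THE
C     FIRST 16 FRAMES, CRUISES, AND RAMPS BACK DOWN OVER THE LAST 16
C     OF THE NTOT FRAMES.
      NF = 16
      FN = N
      IF (N .GT. NF) GO TO 10
C     ACCELERATION PHASE
      FAIRD = V*FN*FN/(2.*FLOAT(NF))
      RETURN
   10 IF (N .GT. NTOT-NF) GO TO 20
C     CONSTANT VELOCITY PHASE
      FAIRD = V*(FN - FLOAT(NF)/2.)
      RETURN
C     DECELERATION PHASE (THE OBJECT STAYS PUT AFTER FRAME NTOT)
   20 FM = NTOT - N
      IF (FM .LT. 0.) FM = 0.
      FAIRD = V*FLOAT(NTOT-NF) - V*FM*FM/(2.*FLOAT(NF))
      RETURN
      END

The constant-velocity branch exhibits the eight-frame lag directly: the displacement is V*(N - 8) rather than V*N.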

4.10 Cycling and Freeze Frames

Another basic fact about physical phenomena is that they are often cyclic or repetitive in nature. This is particularly true of the majority of useful and interesting phenomena. In the case of our fields and waves movie, most of the topics that were treated in detail concerned cyclic situations, and in particular, sinusoidal waves. Since the basis of good pedagogy is repetition in any case, we found it necessary to show a considerable number of cycles of each phenomenon as it was discussed. When a particular point was to be made or emphasized, the motion on the screen was stopped for several seconds while the particular object of interest was pointed out, encircled, or whatever. This sort of situation clearly lent itself to the possibility of saving a considerable amount of computation time by storing and manipulating the images which comprised these cyclic phenomena.

Obviously, this possibility is one that hand animators have exploited extensively for decades. It is instructive to investigate first the techniques they have developed to take advantage of cyclic phenomena and backdrops.

The basic camera tool of hand animators is the aerial image animation stand. Figure 3.6 shows the general features of this machine. There are two means of getting images onto the raw film in the camera, which is positioned at the top of the stand, as shown. The first is by means of the animation table which is directly below the camera. Artwork is rendered in opaque color inks or paints onto sheets of clear acetate. These sheets are called cels and are properly positioned or registered under the camera by means of three holes which lock onto three pins in the animation table. The cel is illuminated by means of the lamps shown, and the shutter of the camera is tripped the required number of times by the camera technician. Every frame of an animated film must be drawn and photographed separately in this manner: each cel is placed, photographed, removed, and the next one placed and photographed, etc. As might be imagined, this work is relatively slow. We were told by Calvin-De Frenes studios, for example, that they had done an animated documentary film for the Navy describing the operating principles of a diesel motor. After the drawings were made, which took several months in itself, the single process of photographing the animated drawings on the aerial animation stand took two months. The length of the documentary was about thirty minutes, which means it involved about 43,000 frames of film. Because hand animation is photographed in twos - that is, each drawing is left on the screen for two frames - the film required a maximum of 22,500 drawings. We can note here, however, one of the tricks used by animators. During a portion of the film the action of the motor was shown in cutaway view. This involved several different components that went through cyclic motions. Calvin-De Frenes studios were able to economize on the number of drawings required (although not on the number of times the drawings had to be positioned and photographed) by having the drawings of the cycle photographed repetitively. However, the various components of the motor moved in cycles of different frequencies. To solve this problem Calvin-De Frenes resorted to another trick of the hand animator. Each component was drawn on its own set of cels, the proper number of cels for its cycle. Each image of the motor was then built up by overlaying a composite of cels, each cel with a different component. Since the cels are transparent, and each component was drawn to show through clear areas above it, the entire set gave a composite image showing the entire motor. As the sequence passed from frame to frame, the camera technician replaced each cel of each sequence in its proper order, and the required set of cyclic motions resulted.

It is also possible with some models of the aerial image system to perform the motions of pan, tilt, and zoom on the drawings on the animation table. The motion must be carefully controlled, of course. This is achieved by having the table moved by rack and pinion or screw gears with attached measuring gauges. The camera technician is given a carefully formatted set of instructions giving him a precise program of how the gears are to be set from one image to the next. By this means the hand animator can accomplish motion with a single, still, painted backdrop. Although it is apparently not normally done, it is also possible to rotate the image in the plane of the animation table.

In discussions with the Calvin-De Frenes technicians it became apparent that a great many of the functions carried out by hand on the aerial image animation system might lend themselves readily to automation.

Figure 3.6 shows that there are two ways to get images onto the raw film in the camera. We discussed the first way above, namely, using painted cels on the animation table. The second way is by means of projecting images from the movie projector located under the animation stand. As can be seen in the diagram, the images are projected under the animation table. The front surfaced mirror shown there, mounted at a 45° angle, reflects the image up through the animation table, where a 9½" by 11" image is formed. This image is focused at the plane of the acetate cels, so that the two images can form a composite that is registered on the raw filmstock. The image projected from the projector is a virtual image and is invisible unless a ground glass screen is put at the plane of the animation table. The image is actually formed in space, hence the name: aerial image. One of the uses of the aerial image stand is the combination of live-action movies and animated drawings. The latter are drawn directly on cels placed on the animation table of the animation stand over a ground glass screen, with the live action movie frames being projected from below, one frame at a time.

The aerial image camera can also be used as an optical printer only. This is the way our color master prints were generated from the computer generated black and white original, as described in Chapter 2.

It can be seen that the technique of storing sets of cyclic images and single still images is a notion borrowed directly from hand animators. The computer, however, adds a number of possibilities. First, the generation of composite images can be done at a much higher rate of speed once the program for their combination has been written. Note that the program for the logic of setting up the composite images must be written in any case, since the exact sequence must be spelled out for the aerial image camera technician. A special case of a cyclic image is the so-called FREEZE FRAME sequence. This is the photographing of one image on the film for a large number of frames. The motion is frozen on the screen. This can be done easily by the aerial image camera technician and, of course, is just as simple by means of computer animation techniques. Background scenes can be defined by means of subroutines in a computer animation system, and the same effects as the aerial-image camera can be achieved by means of the pan, tilt, and zoom routines already described.

It should be noted that the camera effects described above that can be achieved by means of manipulating the animation stand and aerial image camera are all only two-dimensional effects. Discussions with the Calvin-De Frenes technicians made it clear that these two-dimensional effects are the limit of the abilities of the aerial image animation system: It cannot photograph three-dimensional objects which are thicker than an inch or two, and the background image and animation cels cannot be rotated out of the plane of the animation table. Hence the aerial image system simply does not have the capabilities of the virtual camera defined in the digital computer. Calvin-De Frenes mentioned an example of a need for such a possibility in their own work: They wanted a perspective view of a building. They already had an orthogonal projection drawing of the building, but attempts to photograph it at an oblique angle were unsuccessful because of the limited depth of focus of the aerial image camera (although the same limitation would have applied to any real camera). This sort of problem is well suited, of course, to the virtual camera, with perspective transformations, in a computer animation system.

4.11 Split Screen Effects

One of the effects required by film directors is the display of two objects photographed or, in our case, defined at different times, each on one half of the screen - to the right and left, or above and below. This is called a split screen effect. With live action photography this is achieved by means of the aerial image system: Raw stock is exposed once to each live action scene with the opposite side of the film masked out by an opaque cel on the animation table. Hence masks and two passes are required. Split screen is thus considered a fairly expensive proposition. With proper scaling and masking by means of the movie generating programming system, the production of split screen effects, using previously-defined objects, is a simple and well defined process. The costs are no greater than the cost of producing other scenes by computer animation. Of course, split screen effects are equally simple to define by hand animation. However, the computer animation technique allows this effect with previously-defined objects, which is a definite advantage over the hand animation technique.

4.12 The Representation of Vector Fields in Animated Films

The original objective of this development project was the production of a set of documentary films on the topic of electromagnetic fields and waves. Let us assume the existence of a programming system for the production of films displaying any mathematically definable objects. Then the next pertinent question is: what objects give the most illuminating visualization of the phenomena being described? In our case, we wished to illustrate the propagation of disturbances through electric and magnetic fields. These are vector fields defined in three dimensions. They are continuous and have a value at every point. Moreover, dynamic electric and magnetic fields have a definite relationship to each other, and we therefore wish to display them together, in the proper relationship. So we are led to the requirement of color coding. Finally, we wish to denote the fact that the field has different values at different points. This gives rise to the requirement for coding the intensity of the vectors in some manner.

Naturally, there is no way a continuous field can be denoted by a symbol at every point. Nor is it possible for a human being to visualize such a field that way in any case. A physicist or engineer deals with such phenomena on a point-by-point basis. This is the way measurements and calculations are made and design criteria set up. We shall thus set up our representations to denote the fields only at specific points in space. In the case of two-dimensional fields it is conceivable that the representation of a continuous function could be made by some sort of shading technique, and this notion could be extended to the case of three-dimensional fields where the fields are so represented along cross sections. We have experimented with this technique on a small scale, as will be described below. In our case, this technique was hampered by the limitations of our equipment. The SC 4020 has a limited latitude of brightness settings - two, to be exact - and the lower setting hardly gives enough exposure on the film to be visible. PIB recommends that normally only the brighter setting be used. Unfortunately, even the brighter setting is not by any means uniform in its results. Sometimes a single line will have more halation around it than at other times, and hence the density of the lines varies noticeably. We have therefore limited our shading to either a simple washing-out of particular areas by drawing many closely spaced lines, or else denoting continuous shading of an area by discrete dashed-line coding. The first technique has already been described in Chapter 2 under the heading The Process of Generating a Filmstrip by Digital Computer. It seems to have little application to coding continuously changing fields in our movies, although we have used it extensively to denote surfaces and solids in space. The dashed-line coding was applied to the representation of the electric fields between wires in our fields and waves movie. The details will be explained below. The other methods we tried for denoting varying intensity will also be described, including those that were not usable. Completeness requires that we include with our apparent successes our abysmal failures as well.

4.13 Coding by Dashed Lines

As stated above, we used dashed lines to denote the continuous fields associated with the motion of voltage waves travelling down a transmission line. The requirement was to show these fields both as a continuous wave, which was a voltage difference between the two wires, and as electric fields between the wires. The continuous voltage conceptualization was shown on a graph on the top part of the screen. The electric field conceptualization was shown as a set of dashed lines, or more precisely, dashed arrows travelling at the phase velocity of the continuous voltage wave. The conducting wires and their associated field were shown at the bottom of the screen. The representation of the fields in this way required a few tries before a clear representation was achieved. The basic method for denoting intensity was straightforward enough: A wave, particularly a sinusoidal wave, travelling down the wire has a maximum intensity. This is denoted by a full arrow, or vector, with no spaces - the brightest possible field line. At other points along the line, where the field is not of maximum intensity, the vector is denoted by a dashed line such that the proportion of line to space is the same as the proportion of that vector's intensity to the maximum intensity. The first representation was made using the dashed line routines available in the Stromberg-Carlson subroutine (SCORS) program package. These routines start the dashed lines at the same point for all intensities and merely reduce the length of the line portion as the intensity decreases. Unfortunately, when this was used with a sinusoidal voltage wave, the effect was better suited for decorating a Navajo Indian blanket than denoting electric fields. In order to obtain a set of lines that give a clear representation of such fields, we developed a routine that draws one of ten possible line formats, depending on the proportional intensity of the field at the given point. The format was chosen so that it is approximately proportional to the log of the intensity ratio. This rule is closer to the physical way the human eye perceives brightness, which is logarithmic rather than linear. Figure 4.8 shows the actual coding of the arrows used. Figure 4.9 shows a typical frame of film denoting a sinusoidal voltage disturbance on a transmission line. The arrows denoting the continuous field were drawn every 1/6 wavelength, with the arrows drawn so that they are always in the same phase relationship with the continuous graph of the wave. The phase relationship is such that an arrow is always held at the exact points of the extrema of the wave, plus and minus.
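
The choice among the ten formats of Figure 4.8 can be sketched as follows (the function name and the exact mapping constants are illustrative assumptions; the production routine may differ in detail):

      FUNCTION NDASH(E,EMAX)
C     SKETCH OF THE CHOICE AMONG THE TEN DASHED-LINE FORMATS OF
C     FIGURE 4.8 (ASSUMED NAMES AND CONSTANTS).  THE INDEX IS MADE
C     ROUGHLY PROPORTIONAL TO THE LOGARITHM OF THE INTENSITY RATIO,
C     SINCE THE EYE PERCEIVES BRIGHTNESS LOGARITHMICALLY.
      NDASH = 1
      IF (EMAX .LE. 0.) RETURN
      R = ABS(E)/EMAX
      IF (R .LT. 1.E-3) RETURN
C     MAP LOG10(R), RUNNING FROM -3. TO 0., ONTO THE INDICES 1 TO 10
      NDASH = 10.5 + 3.*ALOG10(R)
      IF (NDASH .GT. 10) NDASH = 10
      IF (NDASH .LT. 1) NDASH = 1
      RETURN
      END

The returned index then selects the arrow format, the proportion of line to space increasing with the index.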

4.14 Blinking for Intensity Coding

Another coding technique that was tried was blinking each arrow at a rate proportional to the required intensity. The highest intensity is denoted by an arrow that appears in every frame. The next highest intensity is denoted by an arrow that is on the screen for 22 out of every 24 frames. The next arrow appears 18 frames out of every 24, and so on with decreasing frequency of appearance for the less intense vectors.
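
The decision of whether an arrow of a given level appears in a given frame is then a comparison against the frame number modulo 24, as in the following sketch (assumed names, not the production code):

      LOGICAL FUNCTION BLINKF(IFRAME,LEVEL)
C     SKETCH OF THE BLINKING TEST (ASSUMED NAMES).  LEVEL IS THE
C     NUMBER OF FRAMES OUT OF EVERY 24 IN WHICH AN ARROW OF A GIVEN
C     INTENSITY IS TO APPEAR (24, 22, 18, ...); IFRAME IS THE FRAME
C     NUMBER, COUNTED FROM ZERO.
      BLINKF = MOD(IFRAME,24) .LT. LEVEL
      RETURN
      END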

Unfortunately, vectors generated by this method had a very confusing effect on the viewer. The rate of blinking is so low that the effect is not that of a set of lines, some brighter than others. One actually sees the blinking, and the effect is simply that the arrows appear to be a set of intermittently appearing objects on the screen. We had to label this technique a failure.

Figure 4.8: Coding of Arrows Representing the Various Intensities of a Normalized Vector Field (i.e., Highest Intensity Is One)

4.15 Line Count Per Unit Length for Intensity Coding

Another technique for denoting intensity was used in the case of the electric and magnetic fields between two wires of infinite length. We shall treat this case in detail, since it is a particularly illuminating example of the possible ways of visualizing fields in space.

Figure 4.9: Typical Frame of Film Showing Sinusoidal Wave and Intensity Coded Electric Field Lines

First we derive the equations for the field between two infinite line charges.

Ramo-Whinnery (Reference 4.4) shows that the potential function in the neighborhood of two line charges is
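
In the usual notation (the symbols of Reference 4.4 as quoted in the dissertation may differ), for parallel line charges +ρ_l and -ρ_l this potential is

    \[ \Phi = \frac{\rho_l}{2\pi\varepsilon}\,\ln\frac{r_2}{r_1} , \]

where r_1 and r_2 are the distances from the field point to the positive and negative line charges. The equipotentials are circles about the wires, and the electric field lines are circular arcs passing through the two line charges; this is the geometry used in Figures 4.10 through 4.12.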

Figure 4.10: Cross-Sectional Geometry of the Two Parallel Conducting Wires
Figure 4.11: Determination of the Radius of Curvature and Center of the Arc of an Electric Field Line

entire field. If the first field line is taken as the Y-axis and each subsequent field line drawn to intersect the X-axis at an equal interval of flux on each side of the Y-axis, the resulting picture is given by Figure 4.12. Note particularly that this representation automatically gives us a coding for the intensity of the E field.

Figure 4.12: A Set of Electric Field Lines with Spacing Determined by Equal Flux Between Each Two Lines

The density of the lines is directly proportional to E. As the distance from either wire increases, the spacing of the E lines increases. At any point in space the field intensity can be gauged by measuring the separation between the two nearest lines.

We get an even more interesting result when the voltage between the wires varies. The field lines move, if the same flux spacing between them is maintained. This can be visualized most easily by considering, for example, a small decrease in V0. It can be seen that the line nearest the Y-axis will move away from the center slightly, the next line somewhat more, and so on until the farthest line, which will move a considerable distance farther away from the center. Notice that if the voltage decreases enough, the locus of some of the lines will extend to infinity. This indicates that there is not enough flux from the Y-axis to infinity to show the required number of flux changes. These superfluous E lines are, of course, not drawn. It can be seen that the coding of the field intensity still holds. As the voltage decreases the intensity also decreases and the distance between E lines increases. When the voltage difference between the wires is zero, all field lines have moved to plus or minus infinity - and the field intensity is correspondingly zero.

This sequence of motion was programmed and filmed experimentally. Although the resulting effect was interesting to look at, it was felt by the Project Director, Dr. Rubinoff, that this was not an accurate representation of the physical reality being depicted. Apparently, the field lines around a pair of conducting wires do not actually move as the voltage between them varies. Such a motion would imply that there is a radiation of energy into space from every point along the line, which is not actually the case. It was decided, therefore, that the lines must not move in our representation. Instead, the variation in electric field intensity was depicted by means of a set of static E lines that appeared and then disappeared in an order which most nearly resembled the sequence of change in intensity. Figure 2.7 shows the defining images and the accompanying timing charts. The cycle had a total length of 52 frames. The resulting sequence definitely gave the appearance of a field with lines becoming more and less dense as the voltage difference varied. This is the sequence that ultimately appeared in the Fields and Waves movie, with the E lines shown in the red image.

4.16 Magnetic Field Lines

The magnetic field lines surrounding the two wires posed a similar problem. The magnetic lines can be defined by the lines of equipotential around each wire. (See Reference 4.4). Hence, we set the potential to a constant
Figure 4.13: Magnetic Field Lines Around Conducting Wires, the Lines Being Determined by Equipotential Curves

The spacing of the lines is found by taking equal increments of potential between each two lines. Figure 4.14 shows a typical set of magnetic field lines as drawn for the fields and waves movie. As in the case of the electric field lines, this spacing automatically provides a coding for the intensity of the magnetic field.

Figure 4.14: A Set of Magnetic Field Lines Around Two Conducting Wires with a Current

The question of dynamic intensity coding was the same with the magnetic field as it was with the electric field. If the potential between the wires changes, the distance between magnetic field lines changes, and it is possible to show variation in field intensity by this means. However, the same arguments apply against using this method here - it does not depict physical reality precisely. Hence, the change in field intensity was depicted in a manner similar to the method used for the electric field. Again, the cycle was 52 frames long.

4.17 Vector Length for Intensity Coding

Another technique for depicting field intensity at a given point in space is the obvious one of vector length. In the case of three-dimensional fields, this method seemed to be the most illuminating one. As was stated earlier in this chapter, a scientist or engineer usually will only consider a physical quantity at one point, or along a single line. We have expanded this kind of thinking into a method of visualizing a three-dimensional field in a natural way.

The most basic dynamic three-dimensional field in electromagnetic theory is the field associated with a sinusoidal, plane polarized wave propagating through an infinite homogeneous medium. If one considers a point fixed in space, and measures the electric and magnetic fields at that point, both of these vector quantities will vary sinusoidally at the frequency of the wave. We can make the variation visible if we place a pair of arrows at that point, one that varies in direction and length in response to the electric field, and one that varies in response to the magnetic field. If we put such arrows at equal intervals along a one-dimensional ray that is pointing in the direction of propagation of the wave, we will be able to see the sinusoidal disturbance travelling through the set of arrows.
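
A sketch of how such an entity might be programmed is given below; the subroutine and the three-dimensional line routine LINE3 are hypothetical stand-ins, since in the actual system the arrows were defined in a body coordinate system and drawn through the perspective routines of Sections 4.5 through 4.8.

      SUBROUTINE PWAVE(NARROW,WL,T)
C     SKETCH (NOT FROM THE DISSERTATION) OF THE VECTOR-RAY ENTITY.
C     NARROW ARROW PAIRS (NARROW .GE. 2) ARE PLACED AT EQUAL INTERVALS
C     ALONG TWO WAVELENGTHS OF THE Z AXIS (THE RAY).  THE E ARROW LIES
C     ALONG X AND THE H ARROW ALONG Y, EACH SCALED BY SIN(W*T - K*Z).
C     LINE3 IS A HYPOTHETICAL STAND-IN FOR THE PERSPECTIVE-PROJECTION
C     LINE ROUTINES OF SECTION 4.8; ARROWHEADS ARE OMITTED.
      PI = 3.14159265
      W = 2.*PI
      VK = 2.*PI/WL
      DZ = 2.*WL/FLOAT(NARROW-1)
      DO 10 I = 1,NARROW
      Z = FLOAT(I-1)*DZ
      A = SIN(W*T - VK*Z)
C     ELECTRIC FIELD ARROW (RED IN THE MOVIE)
      CALL LINE3(0.,0.,Z,A,0.,Z)
C     MAGNETIC FIELD ARROW (BLUE IN THE MOVIE)
      CALL LINE3(0.,0.,Z,0.,A,Z)
   10 CONTINUE
      RETURN
      END

The common phase factor sin(W*T - K*Z) is what makes the disturbance appear to travel along the set of arrows as T advances from frame to frame.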

Figure 4.15 shows a typical perspective view of such a set of arrows. Since color cannot be reproduced here, the electric field vectors are shown in solid lines and the magnetic field vectors in dashed lines.

Figure 4.15: Perspective View of the Representation of a Plane-Polarized Wave in Space

In the Fields and Waves movie the electric field was depicted by a set of red arrows and the magnetic field by a set of blue arrows. The effectiveness of this method of depicting fields in space can be seen particularly in the case of polarized waves. Although this was not fully exploited in this first fields and waves movie, it should be applied to several topics in subsequent movies, which will deal with the various aspects of polarized waves. For example, circularly polarized light would appear as a disturbance that propagates along the ray, with each arrow rotating at the circular frequency, and with a phase difference between it and the next arrow on the ray which is exactly equal to the propagation phase lag. The arrows also show the spatial relationship between the electric field vector and the magnetic field vector. As can be seen from the perspective motion sequences of the fields and waves movie, this is an effective way of helping the student to visualize objects in space. The three-dimensional effect is quite evident.

Other sequences that can be clearly depicted with the vector-ray technique are: quarter-wave polarizing filters, elliptical polarization, and the effect of oblique incidence, at a change in dielectric medium, on the polarization of a wave. Hence, the basic mechanism of the waveguide can be introduced and then elaborated.

5. Techniques of Programming: The General Design of the MOVIES Computer Animation System

5.1 The Basic Programming Philosophy - The Use of FORTRAN

As was stated at the beginning of this document, all of the computation for the production of the Fields and Waves movie was done in the FORTRAN IV language on the IBM 360/65 digital computer at the University City Science Center computer installation at 34th and Market Streets in Philadelphia.

Some comments are in order regarding the use of the FORTRAN IV programming language. It should be noted immediately that one over-powering advantage of this language is the sheer fact of its existence. It is a working system on the IBM 360, and it is documented and has an extensive library of mathematical and control routines available. It is supported directly by the skill and resources of the IBM corporation. The development of the MOVIES programming system was therefore an extension of an existing programming language and system, rather than an entirely new language. It was mentioned in the introduction that there are other groups in the MOVIES project that actually are developing new languages for the purpose of more efficiently generating film images. These have not reached completion as of the date of this writing. However, the prospect of having such languages is certainly tempting. One would hope eventually to be able to bridge the gap that now seems to exist between the director and the finished film. The results of our present part of the project should reasonably be expected to provide the basis for the types of language required, which will then translate the director's commands directly into finished film.

The extensions of FORTRAN developed for this project are a set of subroutines that carry out the various functions of drawing required by the programmer. There are in addition subroutines that allow the programmer to manipulate the images thus generated. These latter subroutines are of particular importance for the economical generation of a lengthy film. As will be seen below, some of our requirements for efficient input-output led to sections of the routines being written in IBM 360 machine code.

It should be remarked that certain of our requirements for general subroutine design were beyond the powers of the FORTRAN IV system presently implemented on the IBM 360. The primitive instruction for graphical generation on the SC 4020 is the straight line segment. Whatever objects the programmer wants to depict must ultimately be expressed in terms of these primitives. Therefore, if we wish to display an object that has an algebraic definition, we must do it by means of a one-variable parametric plotting routine, with the object being defined by two (or three) functions of the parameter. This routine must in turn evaluate the functions for increasing values of the parameter from the beginning value to the final value and connect the points by line segments. For example, let us define an ellipse in two dimensions. From the programmer's point of view, the simplest way to do this would be to define two functions of the parameter first.

Let us assume that the ellipse will be defined by the following two parametric functions:

      FX(P) = XO + A*SIN(P)
                                               (5.1)
      FY(P) = YO + B*COS(P)

A closed ellipse would be drawn by increasing the parameter P from 0 to 2π. In the case of the SC 4020, the curve would appear smooth if the individual line segments were no more than 20 rasters in length. Hence, the precise increment from one evaluation of the parametric functions to the next would depend on the size of the ellipse. The most general way would be for the programmer to supply DP as an input parameter to the subroutine. It might also be the case that the programmer would like to display only a portion of the figure defined, without connecting the endpoints, so he needs an argument to indicate whether the figure is to be closed or not.
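
A rough figure for DP follows from the raster size. With the standard scaling described in the next section (±1.05 units mapped across the 1024-raster grid of the SC 4020),

    \[ 1\ \text{raster} \approx \frac{2.10}{1024} \approx 0.002\ \text{units}, \qquad 20\ \text{rasters} \approx 0.04\ \text{units}, \]

and since the arc length of the ellipse per step is at most about A times DP, where A is the larger semi-axis, the segments stay under 20 rasters for DP on the order of 0.04/A or less.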

Note: I shall assume here and subsequently that the reader is at least familiar with the basic notation of FORTRAN IV. When discussing programming techniques, all examples will be given in that language. This does not seem to be an unreasonable expectation. I have observed that reports of the demise of FORTRAN as a living language are greatly exaggerated. For the first time in its history, for example, The Communications of the ACM has published an algorithm in FORTRAN instead of Algol (Algorithm 332, in Volume 11, Number 6, June 1968, to be exact). Also, Dr. Ken Knowlton of Bell Labs told me that he wants to translate his L0 graphic movie language into FORTRAN before releasing it to the general public. Apparently, the universality of FORTRAN makes its use as a standardized language almost an absolute requirement. I think that the development of special application languages should be in the direction of extending FORTRAN, as we have done, rather than producing entirely new languages that simply require another learning period.

Hence, for the convenience of the programmer we would like to define a higher-level subroutine which allows easier drawing of algebraically defined objects: DRAW2F (meaning draw two-dimensional function). It would have the arguments: FX(P), FY(P), Pbegin, Pend, DP (delta P), LJOIN. The routine would compute the values of FX(P) and FY(P) for each value of P from Pbegin to Pend by steps of DP, and connect each point. If LJOIN ≠ 0, it would join the first and last points. For example, in order to generate a half-ellipse, the programmer would define the above functions (Equations 5.1), setting A, B, XO, and YO to the appropriate values. He would then execute the call statement:

      CALL DRAW2F(FX,FY,0.,3.142,.05,0)

This would draw a half-ellipse with the ends open. This appears to be as convenient a means of defining a parametric function as is possible in any language, involving only three defining and executing statements.

Unfortunately, the sequence defined above is not acceptable to the FORTRAN IV compiler. The Equations (5.1) are legitimate statements in FORTRAN IV, and they define the functions indicated. However, we have been informed by the IBM consultants that the compiler uses such a statement function as the definition of an open subroutine: it inserts the entire sequence of coding for the computation of the function wherever the function call occurs in the program. Functions passed to the subroutine DRAW2F, on the other hand, must be named in an EXTERNAL statement in the calling program; DRAW2F cannot accept internal (statement) functions, and there is no mechanism in the FORTRAN language for defining internal functions of the closed type. Instead, we set up a more or less formalized logic for drawing all such parametric functions. This added some complication to the programs and added to the possibility of error, but proved reasonably satisfactory. Figure 5.1 shows the flowchart of the general logic used.

Figure 5.1: Flowchart of the Function Plotting Logic. The parameter names used are explained in the text.
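
For the half-ellipse example the formalized logic of Figure 5.1 amounts to writing the stepping loop out around the statement functions inside each drawing subroutine, along the following lines (a sketch with assumed names, not a listing from the production programs):

      SUBROUTINE HALFEL(XO,YO,A,B)
C     SKETCH OF THE FORMALIZED PARAMETRIC PLOTTING LOGIC (ASSUMED
C     NAMES).  SINCE STATEMENT FUNCTIONS CANNOT BE PASSED TO A GENERAL
C     ROUTINE, THE STEPPING LOOP IS WRITTEN OUT AROUND THEM IN EACH
C     DRAWING SUBROUTINE.
      FX(Q) = XO + A*SIN(Q)
      FY(Q) = YO + B*COS(Q)
      PEND = 3.142
      DP = .05
      P = 0.
      X2 = FX(P)
      Y2 = FY(P)
   10 X1 = X2
      Y1 = Y2
      P = P + DP
      IF (P .GT. PEND) P = PEND
      X2 = FX(P)
      Y2 = FY(P)
      CALL LINEL(X1,Y1,X2,Y2)
      IF (P .LT. PEND) GO TO 10
C     THE OPEN (LJOIN = 0) CASE - THE END POINTS ARE NOT JOINED
      RETURN
      END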

5.2 The Basic Technique of MOVIES Programming on a Mass Scale

Given the basic routines available in the FORTRAN IV library, we have added the routines described in detail in the next section of this chapter. However, it is appropriate here to point out some of the underlying techniques that were used, or at least that we attempted to use, in writing the actual image checkout and movie generating programs. This was a project in which ten individual programmers contributed on a mass scale to the production of about 120,000 separate frames of original film. In order to maintain control and efficiency, it was attempted as much as possible to provide standardized definitions of objects, documentation, and programming variables. In particular, there is a recommended style of programming that it was thought would yield the best results. Naturally, with ten very different personalities and levels of experience, there was some variation in style. In addition, it is certainly true that every person has a style to which he is inclined or accustomed. To the extent that he was able to use his own style effectively, this variation was allowed. In some cases, however, it was necessary to give some people a certain amount of guidance or to recommend different ways of programming a given sequence.

It is clear that one of the most important requirements of anyone attempting to guide the efforts of a number of programmers is that it be made crystal clear exactly what is expected of each one, and when it is expected to be done. In discussions with persons responsible for directing programming efforts in industry, it was found that their general technique for keeping programmers moving is to set up reasonable schedules, with each task broken into reasonable subtasks. These schedules are agreed upon in conference with the individual programmers and then posted for the duration of the job. This technique was adopted by us for this project, with some variation. Figure 5.2 shows one such schedule, as set up for the Fields and Waves movie.

Production Schedule for Programmer A
(Entries give expected completion dates)
Name of Assigned        Completed        Programs for      Programs for        Checkout   Production  Film Clips
Filmscript              Specifications   Checkout Written  Production Written  Completed  Completed   Printed

Scene 13 (sine waves)   June 28          July 8            July 8              July 10    July 12     July 15
Scene 14, Part 1        July 1           July 8            July 10             July 15    July 22     July 25
Scene 14, Part 2        July 1           July 12           July 22             July 22    July 25     July 25
Scene 14, Part 3        July 1           July 12           July 22             July 22    July 25     July 25
Figure 5.2: Example of a Production Schedule
Set Up for One Programmer for the Fields and Waves Movie

It should be noted that the director must subsequently maintain a constant check on the progress of each such schedule, with conferences with each programmer every few days, or more often. As might be expected, the ability to stay with the schedule target dates varied greatly from programmer to programmer. In general, however, if the schedule was reasonable (i.e., allowed almost double the time the job might minimally take) every programmer stayed reasonably within the schedule, if he was suitably coached, persuaded, and cajoled.

The general style of the MOVIES programming system stressed several points:

(1) All time and distance scaling should be standard and automatically set to those standards by a single initializing subroutine (INIT). The programmer is then free to invoke these standards with one call statement and modify only those parameters which he wants different from normal.

These standard values are shown in Figure 5.3. (XC,YC) is the center of the camera. CS is the distance from the center to the edge along either axis. The SC 4020 scaling is set to 1.05/CS. Time is the variable T, and DT is the time increment for each frame.

Figure 5.3: Definition of the Scaling Constants for the Camera Image Plane

Note that it is advisable not to position important objects or action above +.9CS or below -.9CS, since there is danger of the region near the frame line being cut off in the projector.

In particular, INIT sets the camera parameters to XC=0., YC=0., CS=1.0, DT=1/24, T=0. (time), and sets the x and y scaling to run from -1.05 to +1.05. The extra 5% in each direction was added for safety, as will be explained later.

(2) There are sets of system parameters residing in various blocks of labelled COMMON. These determine camera position, rate of time flow, present time, and plotting parameters (print, Calcomp, tape). The programmer knows that these have been set to their standard values by INIT, and he may predicate definitions and motions on that basis.

The labelled COMMON blocks are

      /CAMERA/XC,YC,CS           (the camera parameters in Figure 5.3)
      /TIME/T,DT                 (T=time, DT=time increment per frame)
      /OUTPUT/IPLT,SCENE(100,60) (IPLT = plotting parameter, explained in the
                                  checkout procedure section; SCENE is the
                                  buffer for the printer plots)
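
In outline, then, INIT does little more than fill these blocks with the standard values and establish the SC 4020 scaling; a minimal sketch follows (the actual routine also initializes the output machinery and differs in detail):

      SUBROUTINE INIT
C     MINIMAL SKETCH OF THE INITIALIZING ROUTINE, SHOWING ONLY THE
C     STANDARD VALUES QUOTED IN THE TEXT.  THE ACTUAL INIT ALSO SETS
C     UP THE OUTPUT MACHINERY AND IS NOT REPRODUCED HERE.
      COMMON/CAMERA/XC,YC,CS
      COMMON/TIME/T,DT
      XC = 0.
      YC = 0.
      CS = 1.0
      T = 0.
      DT = 1./24.
C     SC 4020 SCALING WITH THE 5 PERCENT SAFETY MARGIN
      CT = CS*1.05
      CALL XSCALV(-CT,CT,0,0)
      CALL YSCALV(-CT,CT,0,0)
      RETURN
      END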

(3) The style of programming was along the lines of the GROWING MACHINE (see Reference 5.1). This requires explanation. Graphics programming in general depends on the definition of entities, which are sets of lines, and then their manipulation in time and space. FORTRAN provides a very flexible method of defining objects which are described algebraically and have explicit space and time dependence. Therefore each movie generating program consisted of a preamble set of subroutines, each defining a single entity by means of CALL LINEL(X1,Y1,X2,Y2) statements. This was followed by a main program that calls those entities that it wants to appear in each given part of the scene being generated. For example, we write a program to generate a fixed axis with a circle rolling along it that moves from left to right in front of the camera. (See Figure 5.4).

Figure 5.4

The object definition is:

      SUBROUTINE WHEEL
      COMMON/TIME/T
      T2 = T * 1.2
      XW = 1.5 + .5*T
      YW = .5
C     DRAW CIRCLE
      PI2 = 3.14159265*2.
      P = 0.
      X2 = XW + .4
      Y2 = YW
  20  X1 = X2
      Y1 = Y2
      P = P + .1
      X2 = XW + .4*COS(P)
      Y2 = YW + .4*SIN(P)
      CALL LINEL(X1,Y1,X2,Y2)
      IF(P.LT.PI2) GO TO 20
C     DRAW SPOKES
      CALL LINEL(XW-.4*COS(T2),YW+.4*SIN(T2),XW+.4*COS(T2),YW-.4*SIN(T2))
      CALL LINEL(XW-.4*SIN(T2),YW+.4*COS(T2),XW+.4*SIN(T2),YW-.4*COS(T2))
      RETURN
      END
      SUBROUTINE AXIS
      CALL LINEL(0.,.5,5.,.5)
      CALL LINEL(0.,.4,5.,.4)
C     DRAW CROSS HATCH OF WIRE
      X = 0.
  10  CONTINUE
      CALL LINEL(X,.4,X+.1,.5)
      X = X+.1
      IF(X.GT.5.) RETURN
      GO TO 10
      END

We can now run the program from time T = 0 to 4 seconds, which will cause the wheel to roll from x = 1.5 to x = 3.5, all the way across the picture. Our initializing program will set the camera to XC = YC = 0., CS = 1., and we must first place the camera at XC = 2.5 explicitly before starting. Subroutine FRAME will take care of incrementing time, so that we need only test for the end time. Subroutine FRAME has the primary function of advancing the output device to the next frame of film, paper, or whatever. An exact description is given in the next section.

C     MAIN PROGRAM TO ROLL WHEEL
      COMMON/OUTPUT/IPLT
      COMMON/CAMERA/XC,YC,CS
      COMMON/TIME/T,DT
      IPLT = 1
C     INITIALIZE SYSTEM
      CALL INIT
C     RESET CAMERA POSITION
      XC = 2.5
  10  CONTINUE
      CALL WHEEL
      CALL AXIS
      CALL FRAME
      IF(T.LT.4.) GO TO 10
      STOP
      END
If we wish to start the camera moving along with the wheel, say when T = 2., we rewrite the program:

      CALL INIT
  10  CONTINUE
      CALL WHEEL
      CALL AXIS
      CALL FRAME
      IF(T.LE.2.) GO TO 10
      CALL PAN(2.5,3.5,2.)
      IF(T.LE.4.) GO TO 10

(PAN is fully described in the next section.)

This causes the camera to start moving in the plus x direction after T = 2. at velocity .5. In order to pan to the left, the call would be CALL PAN(2.5,1.5,2.), for example.

If the programmer wished to monitor the motion of the camera, he could access the camera parameters by means of a COMMON/CAMERA/XC, YC,CS statement.

Again, the main idea was to stress that the movies programming system should be convenient for standard definitions and manipulations. At the same time, by means of labelled common blocks, the system should be highly flexible for monitoring and changing variables as needed. Note also that since the camera picture limits are available, a subroutine can be made more efficient, for example, by drawing a function only between the limits of the picture. (This would save time since, although LINEL would not actually draw the lines outside the picture, it takes some time in checking these limits.)

Fairings: It will be remembered from the previous chapter that in animation, when an object is at rest and must then be moved on the screen, it is normal to accelerate it smoothly to some fixed velocity, and then decelerate smoothly when it has arrived at its destination. This is called fairing, and is used to avoid confusing, jerky motion. For our purposes, objects generally were accelerated and decelerated over 16 frames. The velocity profile was that shown in Figure 4.7 of the previous chapter.

This process was taken care of for camera motion in the PAN, TILT, and ZOOM routines, but the programmer had to provide for it in the programs drawing the object definitions for individual entities. The use of fairings means that an object will lag 8 frames behind the position it would have if it started at full velocity. The programmer must take this into account in reckoning the positions of moving objects.

The general flowchart of a movie generating program is thus: initialization (a call to INIT, followed by any parameter overrides), then a frame loop in which the entity subroutines required for the scene are called, followed by FRAME, repeated until the end time of the scene is reached.

5.3 Documentation of the Programming System Developed for MOVIES Project

5.3.1 The SCORS Package - The Basic SC 4020 Software Package

Organizations that use the SC 4020 are supplied with a set of programs entitled the SCORS package. These routines provide the means of generating SC 4020 instructions for line and frame manipulations by means of simple subroutine calls. The package also provides a set of scaling routines, character generating routines, and some graph-axis generating routines. The entire package is documented for the user in the SC 4020 User's Manual, published by Stromberg-Carlson. The particular SCORS package used by the MOVIES project on the IBM 360/65 is written in FORTRAN IV, for the most part. It has a number of discrepancies from the User's Manual, but it is accurate enough to be usable. We shall note here the discrepancies that affect the programmer. Otherwise, the reader is referred to that manual, and it will be assumed henceforth that he is familiar with this document. For completeness, however, we include here a description of the most basic routines, since so much of our work was based on these, or depended on using them in a particular way.

These descriptions of XSCALV, YSCALV, NXV, NYV, and LINEV are taken from the SC 4020 User's Manual.

Basic Scaling Subprograms: XSCALV, YSCALV

XSCALV, YSCALV will compute the scale factors for a specified display and store them in an internal table for later use by those functions which convert data. The calling statements are:

      CALL XSCALV (XL, XR, ML, MR) 
      CALL YSCALV (YB, YT, MB, MT) 

XL, XR Floating point values of X for the leftmost and rightmost limits of the scaled plotting area.

ML, MR The amount of margin space to be reserved to the left and right of the scaled area, expressed in raster counts (fixed point integers).

YB, YT Floating point values of Y for the bottom and top limits of the scaled plotting area.

MB, MT The amount of margin space to be reserved below and above the scaled area, expressed in raster counts (fixed point integers).

Example

Figure 5.5 illustrates the relationship of the arguments. The margin specifications are: ML = 170, MR = 192, MB = 340, MT = 128.

Figure 5.5

XSCALV will assign XL to raster location IX = 170, and XR to raster location IX = 831 (i.e., 1023 - 192). YSCALV will assign YB to raster location IY = 340, and YT to raster location IY = 895 (i.e., 1023 - 128). The scaled area will then be the rectangle from IX = 170 to IX = 831, and from IY = 340 to IY = 895.

Conversion of Data: NXV,NYV

Two function subprograms, NXV, NYV, are provided to convert data coordinates into raster coordinates. The argument for each of the functions must be a floating point quantity; the result will be an integer quantity.

The following FORTRAN statements show how these functions may be used to convert data coordinates X (or Y) into raster coordinates IX (or IY):

      IX=NXV(X)
      IY=NYV(Y)

The functions NXV and NYV check for off-scale data values. The result IX (or IY) will be set to zero if the argument X (or Y) is outside the limits that were used to establish the scale. Notice that NXV and NYV have an implicit danger, since any line that goes out of the scaled area will be drawn from its inside point to zero, instead of to the required point. Figure 5.6 shows an example of such an error.

Figure 5.6

In order to guard against this danger, the standard scaling for the MOVIES software provided a scaling safety margin of 5% beyond the camera limits. In other words, INIT set up the scaling by means of the statements:

      CT=CS*1.05
      CALL XSCALV(-CT,CT,0,0)
      CALL YSCALV(-CT,CT,0,0)

Then the camera windowing subroutine LINEL (to be described in the next two sections) limited all lines to being within the CS limits, and scaling errors were avoided.

Line Generation: LINEV

It should be noted that LINEV and NXV and NYV were seldom used directly. However, their functions should be clearly understood in order to allow the reader to understand the MOVIES routines that follow.

LINEV connects two points by a straight line composed of vectors, joined end-to-end. The arguments for LINEV, which specify the points to be connected, must be given in raster counts. As described above, the programmer may connect two data points by a line if he first uses the functions NXV and NYV to convert the data coordinates into raster coordinates. (If there is a possibility that the data points being converted may be off-scale, the conversion results should be tested for errors before LINEV is executed.) The calling statement is:

      CALL LINEV(IX1, IY1, IX2, IY2)
  IX1, IY1 Raster coordinates of one end point
  IX2, IY2 Raster coordinates of the other end point

In LINEV, floating point data values may be used, if scaling has been established, by means of the function subprograms NXV, NYV as follows:

      CALL LINEV (NXV(X1),NYV(Y1),NXV(X2),NYV(Y2) )

Figure 5.7 is an example of the use of LINEV.

Figure 5.7
Windowed Line Generation: LINEL

LINEL connects two points by a straight line composed of vectors, joined end-to-end. It assumes that the parameters (XC,YC) (coordinates of the camera center) and CS (camera scale) have been set in the labelled COMMON block /CAMERA/XC,YC,CS. LINEL actually draws only the part of the line appearing within the limits ±CS about the camera center (XC,YC). If the entire line is outside the window, nothing is drawn. If only some middle part of the line, or some end of it, is within the window, that portion will be drawn.

The calling statement is:

      CALL LINEL(X1,Y1,X2,Y2)
 X1,Y1 Floating point values of the starting X and Y coordinates
 X2,Y2 Floating point values of the ending X and Y coordinates

Once LINEL has determined what part of the line lies within the window, it sets these limited end point names to XA,YA and XB,YB and executes a CALL LINEP(NXV(XA),NYV(YA),NXV(XB),NYV(YB)).

Subroutine LINEP, which will be described in detail in the section on checkout procedures, ultimately executes a CALL LINEV statement, which actually generates the SC 4020 line drawing instructions.
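
The windowing in LINEL can be carried out by a standard parametric clipping calculation; the following is a sketch of one way it might be done, not a listing of the actual routine:

      SUBROUTINE CLIPL(X1,Y1,X2,Y2)
C     SKETCH OF ONE WAY THE WINDOWING MIGHT BE CARRIED OUT (A
C     PARAMETRIC CLIPPING CALCULATION); NOT A LISTING OF THE ACTUAL
C     LINEL.  THE WINDOW IS THE SQUARE OF HALF-WIDTH CS CENTERED AT
C     (XC,YC).
      COMMON/CAMERA/XC,YC,CS
      DIMENSION P(4),Q(4)
      DX = X2 - X1
      DY = Y2 - Y1
      P(1) = -DX
      Q(1) = X1 - (XC-CS)
      P(2) = DX
      Q(2) = (XC+CS) - X1
      P(3) = -DY
      Q(3) = Y1 - (YC-CS)
      P(4) = DY
      Q(4) = (YC+CS) - Y1
      T1 = 0.
      T2 = 1.
      DO 10 I = 1,4
      IF (P(I) .NE. 0.) GO TO 5
C     LINE PARALLEL TO THIS EDGE - REJECT IT IF IT LIES OUTSIDE
      IF (Q(I) .LT. 0.) RETURN
      GO TO 10
    5 R = Q(I)/P(I)
      IF (P(I) .LT. 0.) T1 = AMAX1(T1,R)
      IF (P(I) .GT. 0.) T2 = AMIN1(T2,R)
   10 CONTINUE
C     NOTHING OF THE LINE LEFT INSIDE THE WINDOW - DRAW NOTHING
      IF (T1 .GT. T2) RETURN
      XA = X1 + T1*DX
      YA = Y1 + T1*DY
      XB = X1 + T2*DX
      YB = Y1 + T2*DY
      CALL LINEP(NXV(XA),NYV(YA),NXV(XB),NYV(YB))
      RETURN
      END

Each of the four window edges either rejects the line outright or trims the parametric interval (T1,T2); whatever interval survives is converted to raster coordinates and passed on to LINEP.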

The System for Writing SC 4020 Instructions onto Tape: PLOTDD

Dan Callahan is responsible for the programming of all the entries to PLOTDD designed at the Moore School; Odile de la Beaujardiere assisted with programming the SAVE system entries, which will be described shortly.

The SCORS package delivered to the MOVIES project had a subroutine that packed the SC 4020 instructions into a buffer as they were generated and wrote them onto tape by means of FORTRAN I/O after the buffer was filled. From the very beginning of the mass production of images, it became clear that this method of output was too slow, so a fast buffering system was written.

The fast I/O system was based on a package written by Paul Licker, a Moore School student working at the Science Center. The package is called FASTBUF, and allows the user to set up any number of buffers of equal size in core storage. FASTBUF is used by PLOTDD.

PLOTDD is called with the name of a two-word array which contains an SC 4020 instruction. The routine expects the first six bits right-justified in the first element of the array, and the remaining thirty bits right-justified in the second word.

Because of the disparity between word-lengths in the 4020 and the 360, SC 4020 instructions are stored in blocks of 8 in software registers of 9 360-words. As instructions are generated and delivered to PLOTDD, they are packed into a register XPAD. The packing algorithm is that which was in the original SCORS PLOTDD. If a SAVE switch (see below) has not been set, XPAD is automatically dumped into the tape output buffer when full, then cleared to zero.

The output buffer is a 720 by 5 array equivalenced to 5 linear 720 element arrays (an element in an array always means a four-byte word). A switch is set to load the five buffers successively, and to dump each of them as soon as full. When a buffer is dumped (by a CALL BUFFER(BUF,IM1)), the next buffer in line is checked for output activity (by a CALL CHECK(BUFNM2)). If there is activity, the system holds until it is completed. It can be easily seen that each buffer consists of 80 registers for SC 4020 instructions. Hence the tape records are 640 SC 4020 words in length.

During the packing of instructions in XPAD, an instruction count is continually updated in FLAGS(25), which can be accessed by a CALL FLAGSV(IWDNM,-25). IWDNM will contain the instruction count. In similar fashion, the number of records dumped from the output buffers (2880 bytes in length) can be accessed by CALL FLAGSV(IRCNM, -24). (See the SCORS Users Manual for a detailed description of FLAGSV.)

More buffers will increase the speed of the output operation, up to some limit. By experiment, we found that performance did not improve noticeably for more than five buffers, and this was the number incorporated into our output package. Since the data was packed directly into the output buffers, no time was lost in transferring data from the packing buffer to the output buffers.

With the FASTBUF system included in PLOTDD, the system was able to write at a rate of about 9,000 SC 4020 words per second, which is apparently the maximum possible rate in the IBM 360/65 computer system, since the addition of extra buffers did not improve this figure. This was certainly close enough to maximum to be satisfactory, and this is the present implementation being used.

The SAVE System

The basic idea for the SAVE system originated at the Polytechnic Institute of Brooklyn, although our version was designed and written entirely at the University of Pennsylvania.

The ability of the MOVIES package to produce images on a mass scale at low cost is based primarily on the SAVE subroutine and its ancillary routines. These include disk reading and writing routines, and an overlaying routine. The subroutines described below are all entries to the major subroutine PLOTDD. The SAVE entry changes a set of switches that direct where the SC 4020 instructions being generated are to be stored. In normal operation, they are stored as described in the previous section, that is, in the output buffers. When SAVE is called with the proper parameters, however, subsequent entries to PLOTDD store the SC 4020 instructions into an array called STORE in a section reserved for the particular image being generated. The images are referred to by the programmer with integer number names, with any number between 1 and 100 allowed. The routines operate in the following way:

CALL SAVE(N)
sets up an image area labelled N in storage; all subsequent SC 4020 instructions generated by SCORS routines will be packed into image N, until a call to SAVE with zero argument occurs (at which time block N is filled out to a whole multiple of nine 360 four-byte words).
CALL SAVE(0)
closes the present image area; all subsequent SC 4020 instructions go directly into output buffer.
CALL SAVE(-N)
transfers the block of SC 4020 instructions that make up image N to the tape output buffers.
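
A typical use of these entries for a cyclic sequence might look like the following sketch (EFIELD, the image numbers, and the breakdown of a 52-frame cycle into 13 images held for 4 frames each are illustrative assumptions, not the production code):

C     SKETCH OF A TYPICAL USE OF THE SAVE ENTRIES FOR A CYCLIC SCENE.
C     EFIELD IS A HYPOTHETICAL ENTITY SUBROUTINE.
C     GENERATE AND STORE THE 13 DEFINING IMAGES ONCE
      DO 10 N = 1,13
      CALL SAVE(N)
      CALL EFIELD(N)
      CALL SAVE(0)
   10 CONTINUE
C     PLAY THE CYCLE BACK TEN TIMES, FOUR FRAMES PER IMAGE
      DO 40 ICYCLE = 1,10
      DO 30 N = 1,13
      DO 20 J = 1,4
      CALL SAVE(-N)
      CALL FRAME
   20 CONTINUE
   30 CONTINUE
   40 CONTINUE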

The implementation of SAVE uses: a save area, an array called STORE which consists of 2000 registers (a 9 by 2000 element matrix); a pointer to a register in STORE; and the "save table," a 2 by 100 array called, simply enough, TABLE (all in labelled COMMON /PLOTER/ for use by PLTSAV).

When the program issues a CALL SAVE(K), with K positive, the following occurs:

  1. K is checked for legitimate value (1 ≤ K ≤ 100)
  2. The save table is zeroed if this is the first entry to SAVE
  3. XPAD is checked for instructions. If XPAD is not empty, it is padded with STOP TYPE instructions (op code is octal 12) and dumped into the output buffer.
  4. A save switch is set so that the next dump of XPAD will be into the save area register being pointed to and not into the output buffers.
  5. The current value of the pointer to STORE is placed in TABLE(1,K).

Succeeding SC 4020 instructions are now placed into the save area pointed to by the table. If the STORE pointer exceeds 2000, a message is printed and the job terminated. With this error termination, and with all other abnormal terminations generated within PLOTDD, XPAD and the current output buffer are both padded with STOP TYPE instructions and dumped before termination.

When the program issues a CALL SAVE(0):

  1. XPAD is padded out and dumped into the save area.
  2. The switch set by the previous SAVE call is reset so that succeeding instructions are again dumped into the output buffer.
  3. The last value of the pointer to STORE is placed in TABLE(2,K) and then the pointer is moved up to the next open register.

When the program issues a CALL SAVE(K), with K negative:

  1. K is checked for legitimate value, that is, -1 ≥ K ≥ -100. If it is less than -100, the job is terminated.
  2. TABLE(1,-K) is checked for zero. If it is zero, the area has not yet been defined and the job is terminated.
  3. TABLE (2,-K) is compared to TABLE (1, -K). If less, the area has been defined and closed, but no instructions were ever placed in it. This also terminates the job.
  4. If the three checks above are satisfied, the save area K is dumped into the output buffer.

A call to PLOTND, with any argument (the argument is ignored - it is a remnant of the original routine) results in XPAD being padded out and dumped, and the current output buffer being padded and dumped. A CALL EMPTY holds up the system until all output activity is complete, writes an end-of-file, and closes the data set.

Since a programmer may need more total area than contained in STORE, but not need it all at the same time, provision is made for overlaying old images with new. A CALL OVRLAY(K,ISAVE) will reset the STORE pointer to the first register of save area K, and place the original value of the pointer (which is pointing to the next free space in the save area) in the location ISAVE. A check is made first on the value of K: if not between (or equal to) 1 and 100, the job is terminated. A CALL RESTOR(ISAVE) resets the pointer to the value in ISAVE. A check is made first that ISAVE is between 1 and 2000.

The present SAVE routine has an array for saving images that holds 16,000 SC 4020 instructions (which is 18,000 four-byte 360 words). Since the average size of a frame of film is about 1,000 SC 4020 words, the array can hold at most 16 different full frames. In fact, we discovered very early that this was insufficient for many of our cyclic sequences. On the other hand, a 20 cylinder disk device can hold about 300,000 SC 4020 words, or 300 images. We turned again to Paul Licker, who designed a set of routines for reading and writing blocks of data on the disk, where each block is identified by an eight-byte symbolic name.

The following routines allow permanent storage of images on disk:

CALL COR2DS (N,M)
transfers the image N instructions to the disk file labelled M. Note that M may be any unique 8 byte word; hence, it may be either a number or an eight character BCD name. In general, it seems a good idea to assign symbolic names to widely used images, such as 'METRWIRE' (e.g., meter and wires), while setting aside a set of numerals N000 to N999 for each programmer's private use, where N is the programmer's number (1 to M programmers).
N.B.: If M is an 8 character BCD name, use a Hollerith name, padding, if necessary, to 8 characters; i.e., CALL COR2DS(10,'CRAZY   '). If M is a number, define it as REAL*8; i.e.,
      REAL*8 INSANE 
      INSANE=37 
      CALL COR2DS(14,INSANE)
Note: If a programmer calls COR2DS(N,M) and M has already been defined, the subroutine simply enters the new image under the name M; the old image is automatically deleted when the new image is stored on disk, and its space is reclaimed in the next garbage collection.
CALL DS2COR(M,N)
transfers the disk file labelled M into the SAVE image area N.
CALL OVRLAY (N, ISAVE)
causes SAVE or DS2COR to insert next image into the area previously occupied by image N, and thus makes N available for re-use. Note that this routine simply resets the pointer of next-word-to-be-stored to the beginning of image N. The purpose of ISAVE is to let the programmer be able to restore the pointer to its original value (where it will be storing into new territory again). The pointer can be restored by means of the subroutine RESTOR.
CALL RESTOR(ISAVE)
restores the pointer so that subsequent SAVE'd images will again be stored in new (unused) areas of the SAVE array.

Two entries in PLOTDD provide for storage and retrieval of images on disk. They make use of the on-line storage routines MOLDS, GOLDS, and HOLDS, all written by Paul Licker of the computer center (who also wrote FASTBUF which contains BUFFER, CHECK, and EMPTY).

MOLDS must be called only once during a job. Its parameters are: BLOCK, a 25 element INTEGER*2 array; 'PICT2145', the name of the image-containing data set on disk; and 10, the number of cylinders allocated to PICT2145. This data set was allocated once and contains 18 cylinders, but since the fourth parameter, '+', is not used, no reallocation is done. A quirk of MOLDS is that it permits a maximum of 10 as the third parameter. The above results in a message in the HASP SYSTEM LOG, 'NO '+' GIVEN FOR REALLOCATION', which can be ignored. A member of PICT2145 can be created (by writing) or read after a CALL GOLDS(BLOCK,N,ZNAME), where ZNAME is an 8-byte name of the member and N is the number of a FORTRAN logical unit used to read or write the member. (GOLDS will insert this number into the //GO.FT00F001 ... information provided in the JCL required for use of the on-line storage.) After writing or reading a member, a CALL HOLDS(BLOCK) is required. The reading and/or writing is done without format, and the first word read or written must be the length (in four-byte words) of the record which constitutes the member.
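
The write protocol just described can be sketched as follows; the member name, the logical unit number (8), and the 900-word image buffer are illustrative assumptions, not values taken from the production system:

      INTEGER*2 BLOCK(25)
      REAL*8 ZNAME
      DIMENSION IMAGE(900)
      DATA ZNAME /'SAMPLE01'/
C     OPEN THE ON-LINE STORAGE ONCE PER JOB
      CALL MOLDS(BLOCK,'PICT2145',10)
C     ASSOCIATE FORTRAN UNIT 8 WITH THE MEMBER NAMED IN ZNAME
      CALL GOLDS(BLOCK,8,ZNAME)
C     (IN PRACTICE THE IMAGE WORDS WOULD BE FILLED IN AT THIS POINT)
C     THE FIRST WORD WRITTEN MUST BE THE RECORD LENGTH IN 4-BYTE WORDS
      NWDS = 900
      WRITE (8) NWDS,(IMAGE(I),I=1,NWDS)
C     CLOSE OUT THE MEMBER
      CALL HOLDS(BLOCK)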

When the program issues a CALL COR2DS (K,ZNAME):

  1. MOLDS is called if it has not yet been called.
  2. ZNAME is passed to GOLDS.
  3. GOLDS is called.
  4. Save area K is written on disk, after checking for a validly defined area. If the area has not been defined, the job is terminated.
  5. A message is written out giving save area number of the image written on disk, the name given the image (contents of the 8-byte location ZNAME), the length of the image in 360 words, and the current percent utilization of PICT2145.

When the program issues a CALL DS2COR(ZNAME,K):

  1. MOLDS is called, if it has not yet been called.
  2. The save table is zeroed if this is the first call to DS2COR and if SAVE has not yet been called.
  3. The image with name ZNAME is read into an 1800 word buffer, TEMP. This is required since the first thing read is the length of the record, and the record must be read in the same statement. If the record overflows the buffer, the job is terminated.
  4. The image is transferred into the save area, and TABLE(1,K) and TABLE(2,K) are assigned the values of the pointer at the first and last register of the area. If the value of TABLE(2,K) exceeds 2000, the job is terminated.

Both the stored images and the MOVIE program package were stored on 20 cylinders of a disk at the Science Center. As work progressed on the project, a number of images, parts of images, and various image generating subroutines were accumulated. There is no doubt that because of this, later parts of the Fields and Waves movie were produced in a shorter time and at lower cost than earlier parts.

The Camera Routines PAN, TILT, and ZOOM (written by Philippe Dumont)

CALL PAN (X1,X2,TN)
X1 is the starting value of XC (for example, X1=0. if INIT has been called before)
X2 is the ending value of XC
TN is the time of travelling (in seconds)

If X2 is greater than X1, the camera moves to the right; if X2 is less than X1, it moves to the left.

The subroutine does the fairing, so the programmer does not have to take care of it, but he must remember that TN must be at least 1.33 seconds, since the starting and ending fairings together take 32 frames (32/24 = 1.33 seconds).

On the other hand, this subroutine doesn't allow changes in the velocity of the travelling while panning is being executed. Finally, the time TN, which is the travelling time (fairings included), is independent of the general time T used in the main program. Let us consider the following examples.

Example 1.
      CALL INIT 
  10  CALL PAN(0.,1.,1.5) 
      CALL FIGURE 
      CALL FRAME 
      IF(T.LE.1.7) GO TO 10 

Starting with T = 0. (INIT), the center of the camera is moved to the right during TN = 1.5 seconds until it reaches the ending X-value 1.0. After 1.5 seconds the loop is still executed, but during the remaining 0.2 seconds there is no movement of the camera. The IF statement could equally have been IF(T.LE.1.5) GO TO 10.

Example 2.
 ....
      TP= T + 2. 
  10  CALL FIGURE
      CALL FRAME
      CALL PAN(1.5,1.2,2.)
      IF(T.LE.TP) GO TO 10
 ....
 

This could be placed in the middle of a program and would execute a left PAN from XC = 1.5 to XC = 1.2 during 2 seconds. Before the statement TP = T + 2., the current value of XC has to be XC = 1.5.

In example 1 we called PAN before FRAME which means that at T = 0. the travelling already starts. In example 2, the camera does not move in the first frame (frame number 0). In any case, after the time TN, the travelling is terminated and the camera will not move unless there is a new call to PAN.

CALL TILT(Y1,Y2,TN)
Y1 starting ordinate (starting value of YC)
Y2 ending ordinate
TN travelling time
Y1 > Y2 down travelling
Y2 > Y1 up travelling

Same remarks as in PAN.

CALL ZOOM(SC1,SC2,TN)
SC1 current scale of the object before calling ZOOM (SC = 1. if INIT has been called before).
SC2 scale of the object that the programmer wants to obtain
TN zooming time.

If the programmer wants an object twice as big, he can write CALL ZOOM(1.,1./2.,TN).

From that scale he could then reduce the object to half the dimension of his original object by writing CALL ZOOM(1./2.,2.,TN). Same remarks as in PAN.
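
By analogy with Example 1 for PAN, a fragment that doubles the size of the object over two seconds might read as follows (the statement label is arbitrary, and ZOOM is assumed, like PAN, to be called on every pass through the display loop):

      CALL INIT
  20  CALL ZOOM(1.,1./2.,2.)
      CALL FIGURE
      CALL FRAME
      IF(T.LE.2.) GO TO 20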

The use of the printer (IPLT = -1) will print, with each call to PAN, TILT, or ZOOM, the following parameters: TIME, VELOCITY, X-value (or Y-coordinate or scale), and FRAME.

The use of the parameter IPLT will be described in detail in the next section on checkout.

Dashed Line Subroutines DOTLIN and YDASH (written by Frank Manola)

Subroutine DOTLIN performs the functions of the subroutine DOTLNV described in the SC 4020 manual, which is not implemented in the SCORS package. Calling sequence:

CALL DOTLIN (X1,Y1,X2,Y2,IL,IS)
(X1,Y1) is the starting point
(X2,Y2) is the final point
IL is the desired length of line in raster counts (integer value)
IS is the desired length of space in raster counts (integer value)

Notice that this differs from the routine described in the SC 4020 manual in two ways: the line coordinates are real values, and there are no default values for IL and IS - they must be given explicitly in each call.
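
As a hypothetical one-line illustration (the coordinate values are arbitrary and assume the same scaled units used in the YDASH example below), a dashed horizontal reference line with 20-count dashes and 10-count spaces could be drawn by:

      CALL DOTLIN(0.,-.55,1.,-.55,20,10)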

Subroutine YDASH is designed to facilitate the drawing of dashed lines representing electromagnetic fields between parallel conductors as given in MOVIES object definitions. These dashed lines are vertical, with an arrowhead on one end. Calling sequence:

CALL YDASH(X,Y1,Y2,YFUNCT,AMAX)
X is the horizontal coordinate of the vertical line
Y1 is the upper end of the line
Y2 is the lower end of the line
YFUNCT is the current value of a function whose generated field is being portrayed
AMAX is the maximum value (in magnitude) of the above function

In attempting to show the changing magnitude and direction of electromagnetic fields due to a varying voltage, lines of varying intensities must be used. These are simulated on the SC 4020 by dashed lines with varying line and space lengths. Ten line and space lengths have been selected for this purpose. YDASH, when given a function value, finds what percentage this is of the maximum function value, selects the appropriate line-space combination to represent this percentage, and calls DOTLIN to draw the appropriate dashed line.

In the extreme cases, YDASH produces a solid line for function values equal to the maximum, and no line for function values of zero.
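
The selection logic can be sketched as follows. The particular line and space lengths in the tables are hypothetical (the actual ten combinations used by YDASH are not reproduced here); only the ten-level scheme and the two extreme cases are taken from the description above, and the top level simply stands in for the solid line:

      SUBROUTINE YDSKCH(X,Y1,Y2,YFUNCT,AMAX)
C     SKETCH OF THE YDASH SELECTION LOGIC - NOT THE PRODUCTION ROUTINE
      DIMENSION LL(10),LS(10)
C     HYPOTHETICAL LINE AND SPACE LENGTHS (RASTER COUNTS) FOR 10 LEVELS
      DATA LL /2,4,6,8,12,16,22,30,40,60/
      DATA LS /40,34,28,24,20,16,12,8,4,1/
      P = ABS(YFUNCT)/AMAX
C     ZERO FIELD - DRAW NO LINE AT ALL
      IF (P .LE. 0.) RETURN
C     PICK ONE OF TEN LEVELS; LEVEL 10 STANDS IN FOR THE SOLID LINE
      K = IFIX(10.*P)
      IF (K .LT. 1) K = 1
      IF (K .GT. 10) K = 10
      CALL DOTLIN(X,Y1,X,Y2,LL(K),LS(K))
      RETURN
      END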

Example: Given the function Y=.2*SIN(W*(T-(X-X0)/V)), a partial routine to draw the dashed E-lines for this function might be

      ...
      X=X0 
      Y=.2*SIN(W*(T-(X-X0)/V)) 
  19  CONTINUE 
      CALL YDASH(X,-.4,-.7,Y,.2) 
      X=X+DX
      Y=.2*SIN(W*(T-(X-X0)/V)) 
      IF(X.LE.XEND) GO TO 19 

where -.4 and -.7 are the Y coordinates of the two parallel conductors which produce the field, and DX and XEND are computed elsewhere.

Notice that YDASH calls DOTLIN, so that users of YDASH must also have DOTLIN. The actual lines drawn by YDASH are shown in Figure 5.8.

Figure 5.8: Example of the Printer-Plot Output Used for First-Run Checkout
5.3.2 The Checkout System

The greatest part of the computer expense in generating the Fields and Waves movie was unquestionably in checking out the programs prior to production. This is not really surprising, since it is usually the case with programs that are to be run only once, after they are working perfectly. It was because such bugs were inevitable that our checkout system was devised.

It should be noted that one of the major problems facing us at the beginning of the project was the fact that we had no means of checking directly whether the IBM 360 was actually producing the images expected from a given program. The only way was to generate the SC 4020 instructions onto tape, and mail the tape to Brooklyn Polytechnic Institute to have it printed on film. The ordinary turnaround time for such a process, once the tape had been placed in the mails, was a minimum of two weeks, and often three or even more. It was clear from the start that a method of producing the images locally and quickly was an absolute necessity if the movie was to be completed in a finite amount of time. (This was no exaggeration: I talked with two graduate students from Johns Hopkins, who were working on a movie in much the same way, and who had to mail their tapes to Boston. They had been working on the same film for two full years, and had still not come anywhere near completing it.)

Since there was no SC 4020 available in the area, some other means had to be used. The most direct way was some sort of plot printed out directly from the checkout program itself. The subroutine designed for this purpose became the present LINEP, with an adjunct subroutine FRAME. The programmer can direct LINEP to produce either printer plot or SC 4020 instructions, or both, by means of a switch called IPLT in a labelled block of COMMON/OUTPUT/IPLT. (The exact coding of IPLT will be given shortly.) The function of subroutine FRAME is to set up an area of storage called SCENE, dimensioned 100 by 60, with blank characters in each location. Then each time LINEP is called, it sets up a row of asterisks (*) in SCENE along the line determined by the arguments of LINEP. The next time FRAME is called by the programmer, it prints out the contents of SCENE, in 60 lines of 100 characters each. This combination yields a square image on the printer. The result is a plot of the picture that will appear on the frame of film printed by the SC 4020. Figure 5.8 shows a typical output of the printer-plot.
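
The printer-plot mechanism can be illustrated by a stripped-down sketch of the SCENE side of LINEP (the SC 4020 and Calcomp branches are omitted; the routine name, the point-stepping scheme, and the mapping from raster counts to the 100 by 60 character grid are assumptions of this sketch, not the actual code):

      SUBROUTINE LINEPS(IX1,IY1,IX2,IY2)
C     SKETCH ONLY - PLACES ASTERISKS IN SCENE ALONG THE GIVEN LINE
      COMMON/OUTPUT/IPLT,SCENE(100,60)
      DATA STAR /1H*/
C     STEP ALONG THE LONGER AXIS OF THE LINE, ONE RASTER COUNT AT A TIME
      N = MAX0(IABS(IX2-IX1),IABS(IY2-IY1)) + 1
      DO 10 I=1,N
      F = 0.
      IF (N .GT. 1) F = FLOAT(I-1)/FLOAT(N-1)
      IX = IX1 + IFIX(F*FLOAT(IX2-IX1))
      IY = IY1 + IFIX(F*FLOAT(IY2-IY1))
C     MAP RASTER COUNTS (0 TO 1023) ONTO THE 100 BY 60 CHARACTER GRID
      IC = 1 + IX*100/1024
      IR = 1 + IY*60/1024
      IF (IC .LT. 1) IC = 1
      IF (IC .GT. 100) IC = 100
      IF (IR .LT. 1) IR = 1
      IF (IR .GT. 60) IR = 60
      SCENE(IC,IR) = STAR
  10  CONTINUE
      RETURN
      END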

It can be seen that the printer-plot is not nearly as accurate as the SC 4020 itself. A great deal of detail is lost, and this is often the very detail that makes the difference between output that is acceptable and output that is not. Thus, though the printer-plot is considerably better than nothing, and relatively fast and inexpensive, a better output device was needed. We were fortunate that about the time the movies project began its mass production of the Fields and Waves movie, the University City Science Center installed a Calcomp plotter and made software for it available on the IBM 360. This plotter turned out to be an accurate, though moderately expensive and slow, method of getting precise plots of our graphic output. It has been the workhorse for detailed checkout of our routines.

The basic software for the Calcomp plotter was set up at the Science Center installation by Mr. Howard Lev, based on routines supplied by Calcomp. It includes initializing, line drawing, and tape output routines, all in a disk file called PLTLIB. As they stood, these routines were not convenient to use in conjunction with the MOVIES routines. Therefore, a higher level set was added for our project. Programming for the Calcomp output option was done by Larry Lieberman. The Calcomp equivalent of a frame is a 10.24 inch square centered on the 30" wide Calcomp paper, with 5 inches between frames. The size of the square corresponds to the 1024 raster counts of the SC 4020.

Since the Calcomp is accurate to .01 inches, this gives an accuracy as good as that of the SC 4020. The calling sequence for LINEP is

      CALL LINEP(IX1,IY1,IX2,IY2) 

where IX1,IY1 and IX2,IY2 are the raster coordinates of the beginning and endpoints of the line to be drawn. LINEP checks the value of IPLT, which is in labelled COMMON/OUTPUT/IPLT,SCENE(100,60), and causes the line to be drawn on the appropriate output device: printer, Calcomp, or SC 4020. For the SC 4020 output, it executes a CALL LINEV statement.

The user can go to a new frame of output - on any of the three devices - by executing a CALL FRAME. In the case of the SC 4020, it executes a CALL FRAMEV(3). In the case of the printer, it prints the contents of SCENE, and then erases SCENE by storing blanks into it, thus making it ready for the next image. In the case of the Calcomp, it draws a 10.24" square box around the present image, and spaces the plotter 5" to the next frame. In addition, subroutine FRAME automatically adds DT (delta time) to T (time) in the common block /TIME/T,DT.

The user has three basic responsibilities if he wants to run a plotter job:

(1) Selection of IPLT. (IPLT is the first variable in the COMMON block labelled OUTPUT.) The options are

a.  Printer only 
b.  SC 4020 only 
c.  a and b 
d.  Calcomp only 
e.  b and d 

The user must set IPLT to one of the above values before the first call to LINEP or FRAME (or anything calling these two routines). IPLT can be changed (and thus used as a switch) throughout the program.

(2) Plotter Initialization and Termination. If IPLT has been set to (d) or (e), the program must contain the statement CALL INIPLT before any calls to LINEP or FRAME (or anything calling LINEP or FRAME). INIPLT initializes the plotting software and opens the plot buffer. At the end of every Calcomp plotting program there must be a CALL ENDPLT, which dumps the plot buffer onto tape. No Calcomp plotting can be done in a job after ENDPLT is called; this means IPLT cannot be set to (d) or (e) after a CALL ENDPLT. INIT puts blanks into the SCENE array for the printer-plot. (A skeleton illustrating points (1) and (2) is sketched after this list.)

(3) Job Setup. The proper control cards must be included in the deck for execution. These are described in detail in the appendix on IBM 360 JOB Card Setup.
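
A minimal Calcomp checkout skeleton observing responsibilities (1) and (2) might look like the following; the drawing calls are placeholders, and IPLT = 4 is used here because that is the Calcomp setting appearing in the sampling example of the next section:

      COMMON/OUTPUT/IPLT,SCENE(100,60)
C     SELECT CALCOMP OUTPUT BEFORE ANY CALL TO LINEP OR FRAME
      IPLT = 4
C     INITIALIZE THE PLOTTING SOFTWARE AND OPEN THE PLOT BUFFER
      CALL INIPLT
      CALL INIT
C     (CALLS TO LINEP, IMAGE-DRAWING ROUTINES, AND FRAME GO HERE)
C     DUMP THE PLOT BUFFER ONTO TAPE; NO CALCOMP PLOTTING AFTER THIS
      CALL ENDPLT
      STOP
      END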

5.3.3 The General Method of Checkout

We shall describe here the general procedures followed by programmers producing lengthy sequences of images for the movies project. We have learned a great deal about what procedures seem to work, and what are the pitfalls to be avoided. In addition, we shall outline here, and in the chapter on conclusions, those areas that appear to need further development in order to make the checkout and production procedures more effective and efficient. There seems to be little doubt that the development of extremely low-cost methods of producing movies will depend a great deal on increasing the effectiveness and simplicity of checkout procedures.

Briefly stated, the problem of checkout for computer animation is to be sure that the computer is actually producing exactly the required image for every frame of film generated. Our production procedure was generally carried out in two steps. First, the programmer would write a set of subroutines or segments of his production program. He would then run each subroutine, and each segment, as a batch job at the computer center, with the IPLT switch set to produce enough frames of output to verify that the routines were generating the desired images. In the case of still frames and many sequences that involved short cycles (say of a dozen or fewer images), this involved the production of Calcomp or printer output that was not prohibitive in cost. In the case of longer cyclic sequences or of sequences that did not involve repetitive images, but rather a long sequence of different ones, there was no way of being absolutely sure the entire sequence would be correct, short of producing every frame on Calcomp. The general procedure, therefore, was to make the objects that formed the images time dependent, and then to produce sample images for various instants of time. When these samples appear to be correct, the programmer rewrites the program to proceed from the beginning to the end of the required time in steps of 1/24 of a second, thus producing every frame of film required.

In theory, this procedure should work reasonably well, and, indeed, it has in many cases during the production of the Fields and Waves movie. Unfortunately, there are two major pitfalls that we have only partially gotten around.

The first is that even with the sample frames the programmer obtains on the printer plots and Calcomp, he is still not guaranteed that the film will be absolutely correct. There were cases where there was a bug lurking in a loop of the program that happened not to be entered during checkout. This might spoil just one part of an otherwise good production run. Fortunately, it was discovered that one advantage of using 35 mm film for production is that it is possible to splice corrections into the original filmstrips without damaging the quality of the master color print. There were several cases in which we simply produced an extra strip of film with a correction of the bad part of the sequence and spliced it in before sending the strips to Calvin-De Frenes for printing. Nevertheless, there were also cases where the sequence was found to be so incorrect that the computer production had to be repeated entirely. One example of this was the three-dimensional sequence showing a perspective view of E-field lines travelling on an infinite transmission line. Due to a misunderstanding between the director and the programmer, the sinusoidal wave was defined to be moving in the wrong direction. This particular error was not discovered until the sequence was projected from an experimental preliminary 16 mm strip. (The disadvantage of 35 mm film is that the originals cannot be seen in projection without hiring a studio, projector and professional 35 mm projectionist - all this at considerable expense and risk to the original films themselves).

The second pitfall lies in the fact that the programmer must essentially reprogram in order to change his set of checkout segments into one long production program. This inevitably involves the usual risks of programming errors that arise whenever a program is rewritten. This did happen with a fair degree of frequency. In such cases, as stated above, it sometimes happened that these errors were not detected until the film itself was printed.

One reasonably effective technique for checking to see that the tapes produced contained the desired images was to sample several of the images by means of a subroutine called MOVPRT. In addition, programmers could check the contents of save areas.

      CALL MOVPRT 

MOVPRT prints out a printer plot of frames which have been produced on tape or scratch disk. It reads cards with an integer giving the frame to be printed out punched in the first four columns. Any number of frames may be thus printed out. The last card in the set of cards giving frame numbers must have a '9999' punched in the first four columns as an end marker. Cards containing frame numbers must be in sequence, lowest number first. The calling statement is:

CALL MOVPRT(I)
I=0 is used if frames were produced during present program
I=1 is used if frames were produced at some previous time

The cards must be included as data cards in the run deck, and are read in by MOVPRT when it is executed.

CALL PLTSAV

PLTSAV is an entry in subroutine MOVPRT which allows a printer plot of the image stored in a particular save area to be printed out. The calling statement is:

CALL PLTSAV(I)
I is save area number to be printed out (integer value)

It was adopted as a general rule that every tape produced should be sampled in this way before being sent to Brooklyn Polytechnic for printing, and errors were occasionally detected in this way. Unfortunately, running MOVPRT to check a whole scene required that the tape file be searched in its entirety, with every record being brought in to search for the SC 4020 frame instructions. This was generally a process that took several minutes of 360 time in itself, and this naturally added to the cost of producing each sequence.

Most of the lengthy sequences to be checked out involved loops, and generally DO loops. In that case, the motion involved could be made time dependent, and time set by a statement

      T = I*DT 

where I was the DO loop counter. It was then a simple matter, for example, to sample every Nth frame by changing the statement

      DO 10 I= 1,500 
to
      DO 10 I= 100,500,100 

With IPLT = 4, this would produce four intermediate frames and the last frame of the sequence on Calcomp, which was often enough of a sample to assure that the sequence was correct.

Another method of checkout, introduced by Odile de la Beaujardiere and used by our programmers, was to set up an array at the beginning of the program in a DATA statement. For example, where N is a fixed point array name, the program might have

      DATA N /24, 136, 95, 500/ 

These would be the end values of the counter in the set of loops generating the total of 755 frames in the production run. For the Kth loop the programmer would write

      L= N(K) 
      DO KK I=1,L 

In the checkout run, if only one frame of each loop was needed, the programmer would use the DATA statement

      DATA N /1, 1, 1, 1/ 

This would be the only change necessary to produce the required number of sample frames and changeover to production status was correspondingly simple.

Ultimately, of course, many programs required some sort of ad hoc checkout procedure applied by the programmer to the given situation. This was one reason why the final cost of each part of the film differed so much (as will be seen in the chapter on costs).

Checkout Procedure Using Disk Save

For checkout, the useful feature of the disk save system is that the programmer can save the results of his programs as they are checked out. This can be done, for example, by the following sequence:

      REAL*8 N 
      N = 1006. 
      IPLT = 5 (or 2) 
      CALL INIPLT 
      CALL SAVE(3) 
      (create image 3 by calls to LINEL, etc.) 
      CALL SAVE(0) 
      CALL COR2DS (3,N) 
      CALL FRAME 
      CALL ENDPLT 

If the image plotted on the Calcomp or printer was satisfactory, then the programmer already had it available for use in production, and did not need to spend the time to recreate it. In order to produce, say, a 250 frame sequence using images 1006 and METRWIRE, he could do it most efficiently by means of the program:

      REAL*8 N,NAM 
      DATA NAM /'METRWIRE'/ 
      N = 1006. 
      CALL DS2COR(N,3) 
      CALL DS2COR(NAM,4) 
      DO 10 I=1,250 
      CALL SAVE(-3)  
      CALL SAVE(-4) 
      CALL FRAME 
  10  CONTINUE 

We established a pictorial file of all useable images stored on disk, along with their identifying names. Programmers referred to this file before creating more images, and used existing images wherever possible.

Transition from Checkout to Production Runs

This method of preserving images meant that for many jobs the programmer manufactured all the necessary images for all repetitive parts of the filmstrip before his actual production run. The production run itself was then the absolute minimum in length and in execution time, since only the non-repetitive parts were generated directly.

Tape Length Estimation

Since we are charged a minimum of $25 every time a tape is mounted for plotting at Brooklyn Polytechnic, it was desirable to store the maximum number of images on each tape. This in turn required an accurate estimate of the length of tape required for each scene. This estimate also kept the programmer from overflowing a tape due to ignorance of the size of his production.

The required length can be estimated as follows: each tape is 2400 feet long (plus 50 feet for leaders). We write all SC 4020 instructions in records of 2880 bytes, which is equivalent to 640 thirty-six bit SC 4020 instructions. We write at a density of 800 six-bit characters per inch, so that each record is 4.80 inches long. Adding on an inter-record gap of .75 inches gives 5.55 inches required for each 640 SC 4020 words. Hence, the entire tape can hold a maximum of about 5200 records. For safety's sake, we limited it to 5000 records or less.

Assume that we have two images to combine onto tape to form a movie sequence 1800 frames long. During checkout and storing of the images on disk, assume that the programmer has (or should have) determined that image 1 is 800 SC 4020 words long, and image 2 is 400 words long. The output sequence would be

      DO 10 I=1,1800     
      CALL SAVE(-1)      
      CALL SAVE(-2)     
      CALL FRAME     
  10  CONTINUE     
      CALL PLOTND(1)     

FRAME adds one more SC 4020 instruction to the output buffer. Since each SAVE'd image has already been rounded out to a whole number of nine-word blocks, the output subroutine pads out the single FRAME instruction with another seven 36 bit words to fill out its block. Hence, the total number of words for each of the 1800 frames is 800 + 400 + 8 = 1208 SC 4020 words. This will require about

((1208 words per frame) / (640 words per record)) * 1800 frames = approximately 3400 records

which should fit easily onto one tape.
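
The estimate can be reduced to a few statements; a sketch using the figures above (640 SC 4020 words per record, 5.55 inches of tape per record) is given below, with the per-frame word count taken from the worked example:

C     SKETCH: TAPE RECORDS AND FOOTAGE FOR THE 1800-FRAME EXAMPLE ABOVE
      NWDFRM = 1208
      NFRAME = 1800
      RECS = FLOAT(NWDFRM*NFRAME)/640.
      FEET = RECS*5.55/12.
      WRITE (6,20) RECS,FEET
  20  FORMAT (1X,F8.0,8H RECORDS,F9.1,13H FEET OF TAPE)
      STOP
      END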

Appendix 1 Job Control Language for Running MOVIES Programs

The following JCL is required for a MOVIES Calcomp checkout job. Some of the cards are optional. For a non-Calcomp MOVIES checkout job, remove all the /*SETUP cards, the DD card with DSNAME=SYS1.PLOTLIB, and the //GO.PLOTTAPE card. The two extra cards needed for a production job are included below and their correct positions are noted.

Column 

1    678      15        25        35        45        55        65     72
//jobname JOB (2008,bin#,t,l,c),'programmer',MSGLEVEL=1
/*SETUP          $$$$$$                                       $$$
/*SETUP          NAMEname           BINbin#          PROJECTproj#
/*SETUP          PLOTNAMEplotname              DATEdate
   Make plotname agree with jobname to avoid confusion
   Add following card if submitting other than Sys.1
/*SETUP          JOB FROM name of location
/*SETUP          NINETREK,OUTPUT,PLOT
/*SETUP          $$$                                          $$$
//stepname EXEC FORTCCLG
//FORT.SYSIN DD *
       [Fortran source decks]
/*
//LKED.SYSLIB DD DSNAME=SYS1.FORTLIB,DISP=OLD
// DD DSNAME=SYS1.PLOTLIB,DISP=OLD
// DD UNIT=2314,VOLUME=SER=UPDAS2,DISP=OLD,DSNAME=UP.2145.MOVY
// DD DSNAME=UP.2145.MOVY,DISP=OLD,VOLUME=SER=VOLUME,UNIT=2314
//LKED.SYSIN DD *
  [Object decks, if any. If none, the preceding card is optional.]
/*
//GO.PLOTTAPE DD DSNAME=PLOTAPE,VOLUME=SER=PLOT,LABEL=(,BLP),        X
//          UNIT=NINETRK
//GO.FT10F001 DD UNIT=2314,VOLUME=SER=DISK01,DISP=(NEW,DELETE),      X
//             SPACE=(CYL,(1,1),RLSE),DCB=(RECFM=F,BLKSIZE=2880)
      For production runs replace above DD card with
               LABEL=(,BLP),DCB=(DEN=2,TRTCH=C,RECFM=F,BLKSIZE=2880)
      and add the following SETUP card immediately after JOB card
/*SETUP       tape#,7TRK,SEVENTRK,OUTPUT
//GO.SYSIN DD *
     [Data cards. If no data, the preceding card is optional.]
/*

All /* cards should be included even if the options are not used. For any continuations (which can occur only after the comma separating parameters) place an X in column 72. The continued card must have slashes in columns one and two and its parameters must begin in column 16.

Some Suggestions: Although the use of the plotter is not too expensive, it is a bit slow. The plotting routines may cause the running time for some programs to increase. The Calcomp output is rather large and hard to handle. Therefore, try to make use of IPLT as a switch. Change IPLT, in your main program, sampling Calcomp output at certain intervals. The length of a full roll of plotting paper is 127 feet, which could hold about 100 frames. However, Calcomp jobs are batched so that several jobs might come before yours on a single roll of paper. If this happens, you may get output in pieces, and some of the frames may be lost. Also, turnaround time for plot jobs is now about one day.

Try to keep the /*SETUP card with the plotname and date current and identifiable.

Appendix 2 SCORS Package for OS/360

A2.1 General Description

The OS/360 version of the SCORS SC 4020 subroutine package provides the user with essentially all the capability described in the SC 4020 Programmer's Reference Manual (Stromberg-Carlson document No. 9500056). Usage of the routines is identical in nearly all cases to that described in the reference manual; therefore, this document discusses only the differences between the manual and the OS/360 system.

A2.2 Language

The OS/360 SCORS package is written completely in Fortran (G or H level) with the exception of four small routines which are written in assembly language. Two of these are routines to perform logical shifts, the other two are tables of bit patterns used by the vector character routine VCHARV. The names of these routines are SHFT1V, SHFT2V, TABL1V, and TABL3V.

A2.3 Differences from Programmer's Manual

The fact that this version of the SCORS package is written in FORTRAN forces two fundamental differences with previous versions:

(1) No routine may have an optional calling sequence of different length than the normal calling sequence. Where this restriction appeared to decrease capability it was circumvented by adding a new routine using the optional calling sequence (see PRINTV and TYPEV).

(2) No routine may pass locations of variables instead of values (see SCERRV, SERSAV, and SERREV). No capability is apparently lost by this restriction, although the usage of the three routines mentioned is changed somewhat.

Specific differences between the OS/360 version and the SC 4020 Programmer's Manual are listed below.

(1) APLOTV - The characters to be plotted (stored in the array MARKPT) are assumed to be represented by integers (that is right adjusted) if NC is positive, and by characters (that is left adjusted) if NC is negative. Thus if APLOTV is called with a Hollerith argument for MARKPT, or if MARKPT consists of characters read in with an Al field, NC must be negative.

(2) BNBCDV - The results returned by BNBCDV occupy two words; thus BCDD must be an array dimensioned at least two. The six characters are right adjusted.

(3) FRAMEV - Must always be called with an argument.

(4) FRMNOV - Called with one argument only. This argument must be an integer and sets the frame count number to this value.

(5) ID4DV - New routine used to pass ID information and cause an ID frame to be drawn: CALL ID4DV(IDCARD).

IMPORTANT: Programmer should precede his production program with (in FORTRAN format):

      BLOCK DATA 
      COMMON/ID/ IDCARD 
      INTEGER IDCARD(20) / id text as described below, or blank /
      END

The information in IDCARD is:

IDCARD(1) to IDCARD(16): 64 characters of comment 
IDCARD(17) and IDCARD(18): 8 character job name 
IDCARD(19) and IDCARD(20): 8 character date 

CAUTION: ID4DV now calls LINEP, which means that IPLT must be set before CAMRAV is called. The first call to CAMRAV causes a call to ID4DV.

(6) KWKPLT - May be called with five arguments only: CALL KWKPLT (LX,LY,N,18H(LH),18H(LV)).

(7) LOCSAV and LOCSTV not available.

(8) NOFRV - May be called with one argument only: CALL NOFRV (NOFRM). Returns the current frame count number.

(9) PLOTDD - Output routine equivalent to PLOT and (PLOT). It accepts a 36 bit 4020 instruction, packs it into a buffer and automatically spills the buffer when full. It is not referenced directly by the programmer.

(9a) PLOTND - Must be called with one argument: CALL PLOTND(n), where n is any argument. An EOF is written by PLOTND. PLOTND must be called at the termination of the run to ensure dumping of the output buffer.

(10) POINTV - May be called with three arguments only CALL POINTV(X,Y,NS).

(11) PRINTV - May be called with four arguments only CALL PRINTV (N,BCDTXT,IX,IY). The "type current point" feature is utilized by a new routine TYPEV that is the same as PRINTV with two arguments in other versions of the SCORS package.

(12) RESETV - May not have an argument. Use FRAMEV if corner marks and frame counting are desired.

(13) SCERRV - The arguments represent values of the scaling error indicators instead of locations. CALL SCERRV (KX,KY) stores the current values of these indicators in KX and KY. Thus it is necessary to call SCERRV after using NXV and/or NYV, instead of before as described in the manual. See also SERSAV and SERREV below.

(14) SCOUTV - Not available.

(15) SERSAV and SERREV - These retrieve and restore the values of the scaling error indicators, respectively. Thus SCERRV and SERSAV both perform the same function. SERREV allows the programmer to set the indicators to whatever values he desires.

(16) TABL-V - There are only three tables of vector characters implemented at the present time. These are TABL1V, TABL2V, and TABL3V. The structure of the vector character tables is somewhat different from that described in the manual, in that each 12 bit pattern occupies one half word (16 bits) of core.

(17) XAXISV and YAXISV - May be called with three arguments only. CALL XAXISV (IX, IY, NSTPT).

(18) TYPEV - New routine to utilize the "type current point" feature of the 4020. Usage is identical to the description in the manual of PRINTV with two arguments: CALL TYPEV (N, BCDTXT).

(19) SHFT1V - New routine to perform a logical one-register, off-the-end shift: CALL SHFT1V (AIN, AOUT, NS), where AIN is the input, AOUT is the result, and NS is the number of bit positions to shift (left if positive, right if negative). Vacated bit positions are filled with zeroes.

(20) SHFT2V - New routine to perform a logical two-register, off-the-end shift: CALL SHFT2V (AIN, BIN, AOUT, BOUT, NS), with arguments similar to SHFT1V. On a left shift (NS positive) bits are shifted through bit position one of BIN into bit position 32 of AIN, and vice versa for a right shift. Vacated positions are filled with zeroes.

(21) ANDV - New routine to perform the logical product of two quantities: CALL ANDV (A, B, C) performs C = A ∧ B.

(22) ORAV - New routine to perform the logical sum of two quantities: CALL ORAV (A,B,C) performs C = A ∨ B.

NOTE: The shift routines and ANDV and ORAV are general purpose routines whose use by the programmer does not imply use of any other portions of the SCORS package.
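
As a hypothetical illustration of these general purpose routines (the mask value, the test word, and the shift count are arbitrary), the low-order six bits of a word could be extracted and left-justified as follows:

      MASK = 63
      IWORD = 173
C     LOWSIX RECEIVES THE LOW-ORDER SIX BITS OF IWORD
      CALL ANDV(IWORD,MASK,LOWSIX)
C     A LEFT SHIFT OF 26 MOVES THOSE SIX BITS TO THE HIGH-ORDER END
      CALL SHFT1V(LOWSIX,IRESLT,26)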

Output Tape - The output tape number is compiled into PLOTDD as Fortran logical unit 10. It can be changed by CALL TPNUMV (INTAPE) or by recompiling PLOTDD.

Since the SC 4020 is a 36-bit machine, each record written by PLOTDD must be some multiple of 36 bits. Because the tape is written with Fortran statements, each record must also be a multiple of OS/360 half-words (16 bits). The buffer length (currently 720 360-words, containing 640 SC 4020-words) is so chosen that it satisfies both these constraints. When a partial buffer is written, however (which happens only when PLOTND is called), it may be necessary to pad the record. Stop type instructions (op code octal 12) are used for this purpose.

6. Film Editing

In the generation of computer animated film by one technique we have developed, there are actually two stages in the process of editing before the final copy of the film is produced. The first is in the checking and possible correction of the 35 mm originals before printing through color filters onto 16 mm film. The second is the actual process of editing the final workprints to match the sound track.

At the present state of the art, the SC 4020 is not yet 100% perfect in reading the tapes generated by the IBM 360. There are therefore occasional tape reading errors during the generation of the 35 mm originals, and these will cause some bad film frames. In addition, it is likely that some programming errors have gone through even the fairly elaborate checking process used during checkout of the digital programs. Therefore, the director should check the black and white 35 mm originals with a synchronizer and a light table before sending them to be printed in color. A synchronizer is a device consisting of one or two toothed wheels, each exactly one foot in circumference. The 35 mm or 16 mm film is held against the wheel at one point by two rollers, so that the wheel turns as the film passes, and thus the editor can measure how much film has been wound off a reel to any given frame. A counter tallies each foot (see Figure 6.1). The director is able by means of the synchronizer to check the exact frame counts between cues, to make sure they are exactly the same as specified in the action flowsheets. It is possible to make some limited corrections or deletions at this stage.

Figure 6.1: A Four-way Synchronizer

Frames can be deleted by cutting the film and splicing, using a professional hot splicer. This is a great advantage of 35 mm originals over 16 mm originals. Splices in the 35 mm film do not disturb the focus or registration of the image, which would be the case if the film used for printing were 16 mm. With 16 mm film originals, the director would not be able to make any splices, since there would be a definite jump in the printed image. The fact that splices are possible also allows the insertion of corrections, and this has been done in some sequences of the Fields and Waves movie. The synchronizer and splicer were rented from Calvin-De Frenes at a nominal cost ($12 per week for the synchronizer and $9 per week for the splicer).

It is also possible to correct some errors during the printing of the originals onto the color 16 mm film. The director can specify the manner in which every frame is printed, so that, for example, a given frame can be held (freeze-frame), or sections skipped. It must be borne in mind, however, that all of this is being done by hand, at the rate of $35 per hour, so that any complication will add considerably to the cost. In discussions with Calvin-De Frenes technicians, it became clear that such things as cyclic phenomena would be very expensive to reproduce by manual cycling of the optical printer.

The process of editing the final copy of the film from the workprint is described in good detail in the book The Technique of Documentary Film Production by W. H. Baddeley, Chapter 11, "Editing". This is the most useful and practical description we have found in the literature. (The Technique of Film Editing by K. Reisz covers only the artistic aspects of the topic.)

For completeness, however, we shall give a summary of the steps in the process of film editing. It should be noted that the skill and equipment required to do a professional job of editing can only be supplied by a professional film studio. In our case, we used the services of Calvin-De Frenes Studios.

Step 1. Checking the workprints. When the 16 mm color film is returned from the optical printer, it is labelled with a warning "ORIGINAL - DO NOT PROJECT". This is the film (which we call the master print) that has actually been printed by the aerial image camera. Since it has the original images on it in their best possible focus and registration, it is very valuable film, and must not be handled any more than absolutely necessary. Until the final film is to be printed, it should be stored away and never opened. (Calvin-De Frenes has a special archives room for all original prints in its possession.) There will be another reel accompanying the first, and this one will be labelled "PHOTO WORK-PRINT". This is a copy of the master, and is the copy to be used for projection during checking, and for cutting when the film is edited. The workprint has a consecutive serial number printed at the edge every foot along its length.

These are called edge numbers, and they are matched exactly by numbers printed on the edge of the master 16 mm print. These numbers allow the master film to be cut to match exactly the final form of the edited workprint, when the master is being prepared for printing.

The first step in editing the film is simply to project the workprint in order to check it for technical or printing defects. If all is in order, it can be stored away until all the scenes are ready.

Step 2. First assembly. After all the prints for the film have been printed, the workprint for each of the scenes should be spooled on a separate 50 foot reel, labelled, and the scenes stacked in order. The assembly itself is done at an editing bench with the professional editor manipulating the film reels and doing the splicing. The director sits beside the editor and checks the picture and sound as they are assembled. The director has a copy of the script to make sure all cues are timed properly with the sound track. The sound track is on magnetic perforated filmstrip that is the same size as the 16 mm film itself, and it was recorded at the standard speed of 24 frames per second. The film is viewed by means of a small illuminated screen as it is reeled from the individual scene spools onto a single, large take-up reel. At the same time the magnetic sound track is passed over a magnetic pick-up head attached to an amplifier and speaker, so that the picture and sound can be seen and heard synchronously. The editor is able to expand the sound track by adding sections of silent track. He can also do a limited amount of shortening of the sound track, if there are particularly long periods of silence, although care must be exercised in order to keep the pace of the narrative in its natural rhythm. The editor is even capable of deleting words from the sound track, or re-arranging them, but this requires a great deal of skill, and is very risky, as might be imagined, since the narrator's tone will change from word to word in an unnatural way if it is not done properly. Since the animation has been generated to match the sound track, there should be little if any need for tampering with the sound track, except for adding periods of silence where the action flowsheets had requested them to fit the pace of the visual scenes. It is also possible to cut the visual sequences in order to match the sound track or to speed up the pace of the film. Again, however, this should not be necessary in many places if the film has been properly generated to a previously written script.

As the scenes are cut and spliced together, the editor will mark on the workprint where the cuts have been made, to aid in the subsequent matching with the master print. In addition, he will mark where the fades and dissolves are to occur, and how long they are to be. The standard markings are shown in Figure 6.2.

Figure 6.2: Conventional Cutting Room Signs, Marked in White China Marker Directly on the Workprint by the Editor

Step 3. Screening the edited assembly. After the picture and sound track have been matched, and the entire workprint assembly spliced into one continuous reel, they are ready to be projected on a device called an interlock projector. This is a projector that reads the sound track reel in synchronism with the projected workprint. This is the first time the director is able to see the film in the form it will appear before an audience. He should be prepared to take notes of possible errors, or changes that should be made in order to sharpen the pace of the film. If the changes are minor, there is no need for seeing the interlock projection more than two times or so. If there are major changes, there may be a need for several round trips between the editing room and the projection studio.

Step 4. Conforming. This step is taken care of by the professional conformer. It involves matching the color master prints to the final assembly of the workprint. This is achieved by matching the edge numbers of the master and workprint. From the point of view of the director the significance of the conforming process is the fact that the master prints are assembled onto two rolls called the A and B rolls. Both rolls start from a common cue marked on the leader. The scene up to the first transition (which may be either a cut, fade, or dissolve) is assembled in roll "A". The scene which is to disappear is followed by blank leader. The scene to which the transition is to be made is placed in the "B" roll, which consists of blank leader up to that point. (Two scenes which are to dissolve into one another are overlapped by the length of the dissolve.) The scene continues on roll "B" until the next transition is reached, roll "A" being padded out with blank leader. This procedure is followed throughout both rolls in what is called "checker-board" fashion. Figure 6.3 shows a typical assembly and the resulting print. The reason for assembling the master in this fashion is to print copies of the film on which the splices are invisible. If the master print were simply spliced at each cut, there would be an overlap onto one of the frames and this would appear as a dark band across the screen when the print is projected. The A and B roll assembly is spliced in the manner shown in Figure 6.3, so that all splices overlap only on the blank portions of the rolls. The A and B rolls are then printed in the following manner. Roll A is printed first, a fade out being inserted at the end of a scene that is to dissolve. The printer shutter remains closed while the blank leader that follows it passes. When the next transition is reached the shutter opens again, producing a fade-in, if required, and remaining open during the ensuing scene. After the A roll has been printed, the raw copying stock is wound back to the start again and the B roll is threaded up, with the starting cue placed against the same frame of the copying stock as was the starting cue of the A roll. The same procedure is followed, but the fade-ins on the B roll will now be superimposed on the fade-outs of the A roll, and vice-versa, to produce the dissolves. Since the cuts match up on the two rolls, the transition is made with no splice showing.

Figure 6.3: "Checker Board" Assembly of 16mm Masters to Produce Invisible Splices, Fades, and Dissolves. (Note that cross hatch denotes black film.)

It should be noted that the splices for cuts and dissolves are not made exactly at the cut point, but rather four or more frames afterward. This is to avoid having a splice actually in the gate of the printer when a frame is being printed, since this would cause a slight jump, and possibly throw the film slightly out of focus. Small metal tabs on the A and B rolls cause the printing lamp to turn off at exactly the right frames, so that there is no overlap of images on the print. This requirement for overlap, however, should be borne in mind when the animation is generated, since the extra frames must be provided if there is any reason to expect that the film might be cut at that point. The same precaution applies to dissolves. The director should provide extra frames, at least nine, besides the 16, 24, 48, or whatever, required for the dissolve itself.

If there is any question as to exactly where the transition is to be made, it is better to be on the safe side, and provide an extra foot or so on each side of the transition, so that the editor can have some margin to work with.

Step 5. Setup for titles. Before the final film is printed, the titles must be set up. In the Fields and Waves movie, the opening titles and credits were printed by the Calvin-De Frenes art department, and were printed superimposed on live-action sequences that had been photographed at the University City Science Center computer.

Step 6. Printing the answer print. The conformed A and B roll assembly along with the magnetic sound track are sent to the film laboratory for printing. The film laboratory first prints an optical sound track from the magnetic track. They then print together the A and B rolls and sound track to make one copy of the final film. The copy, called an answer print, is returned to the director, who projects it to check that the sound and visuals are exactly right. This is the final step in the process of making a film. If the answer print is correct, the A and B roll masters are then available to make all required copies for release to the public.

7. The Costs of Producing Computer Animated Motion Pictures

One of the objectives of this part of the MOVIES project was to find out the costs of producing a full length documentary by means of digital computer. In addition, it was anticipated that existing means of reducing the costs of such animation would be studied and new ones developed.

The costs can be divided into several categories:

  1. Planning the accordion, script writing, and setting up the action flowsheets and object definitions.
  2. Recording the sound track.
  3. Programming and production on the digital computer of the SC 4020 printer instructions.
  4. Printing on the SC 4020.
  5. Setup and printing on the aerial image camera.
  6. Editing and laying the sound track to the visual images.
  7. Conforming the master print to the edited workprint and printing of the answer print and release prints.

With our present technique, items 2 and 4 to 7 are more or less fixed. They utilize standard techniques that are well defined, and these costs can be estimated reasonably accurately before production. Included in this part of the cost is the preparation of titles, if they are to be produced in a conventional manner. Table 7.1 shows the costs of producing these parts of the Fields and Waves movie. These cost figures were supplied by Calvin-De Frenes Studios, who did the work for the items listed.

Table 7.1: Cost of the Non-computer Related Items Involved in the Production of the Fields and Waves Movie
                                                                               $
Titling Sequence (shown over live action sequences shot in the computer room)
  Title cells - 11 acetate cells in color ($6.50 up to 3 lines; over
    3 lines, $2/line)                                                     116.00
  Title drop shadows - 11 black drop shadows ($2 each)                     22.00
  Aerial image 16 mm Ekta title photography, titles over live action
    (3 hrs. @ $50 for 1st hr., $35/hr. thereafter)                        120.00
  Title sequence Ekta stock, processing and E/N workprint
    (125 ft. @ $0.233/ft.)                                                 29.12

Optical Printing, Editing and Preparation
  Preparation of computer scenes for aerial image (45 hrs. @ $10/hr.)     450.00
  Aerial image 16 mm master in color, of computer strips for 10 scenes,
    each consisting of a double pass for separate colors - setup for
    each scene @ $15                                                      150.00
  Photography of 925 ft. 16 mm (double pass) in 37 sections
    @ $27/section                                                        1155.00
  Aerial image 16 mm master in color, of computer strips for 10 scenes,
    each consisting of a double pass for separate colors - setup for
    each scene @ $15                                                      150.00
  Photography of 375 ft. 16 mm (triple pass) in 15 sections
    @ $35/section                                                         601.00
  Animated sequences Ekta stock, processing and E/N workprint
    (1500 ft. @ $0.233/ft.)                                               349.50
  Editing workprint and magnetic track (32 hrs. @ $15/hr.)                480.00
  Workprint and magnetic track interlock screening (1 hr. @ $25/hr.)       25.00
  Open and close library music only (@ $75/set)                            75.00
  Individual sound effects (@ $15 each, minimum 3)                         45.00
  Two-channel mix of narration and sound effects, including magnetic
    tape - 40 min. or 1440 ft. ($100 per 10 minute reel minimum,
    $0.275/ft. after minimum)                                             396.00
  Conforming original into A & B rolls (@ $60/360 ft. - 1440 ft.)         240.00
  Optical B-wind negative track transfer (@ $35/360 ft. - 1440 ft.)       140.00
  Preparation of printer effects, dissolves, fades, and/or straight
    cuts (@ $25/360 ft. - 1440 ft.)                                       100.00
  One sound Kodachrome answer print (1465 ft. @ $0.167/ft.)               244.66
  Vacumate film treatment (1465 ft. @ $0.003/ft.)                           4.39
  One 1600 ft. reel, can, case (@ $2.60 each)                               7.80
  Shipping and telephone to and from Kansas City                           60.00

  TOTAL                                                                  5040.47

It can be seen that one of the primary costs is the preparation of the 35 mm originals for printing, together with the cost of the aerial image camera itself. The aerial image camera has a standard cost of $50 for the first hour of use, and $35 per hour thereafter, for the duration of the particular job. This means that it is desirable to submit for printing as much as possible of the film at one time. In our case, since the originals were being produced over a period of about six weeks, and it was also desirable to know how the early productions looked, the originals were submitted in four different batches. This almost certainly increased the cost of printing somewhat, but by allowing checking as we went along, it let us produce the film in a shorter time. Given the cost differential for the first hour of each use, this added cost was only about $45.

It can be seen that the per-minute cost of film naturally depends on the number of different colors required. This is true, in addition, for all steps in the computer production, of course, since each color added a like amount of programming, checkout, and production costs. Table 7.2 gives the computer costs actually incurred in the production of the Fields and Waves movie. In addition, it gives a breakdown of those costs by scene and by programmer. (The actual names of the programmers have been omitted, of course, and letters used instead: Programmer A, B, C, etc.)

These figures clearly indicate that the costs of production varied considerably from scene to scene. The variation between one programmer's costs and another seems very large. That is, some programmers seemed to be more efficient than others, even though they were given scenes of the same or greater complexity.

It is also true that the cost of production did decrease with the scenes produced later in the project. From this we can conclude that there was a learning process going on, and that we could expect costs to go down as later films are produced.

However, in order to achieve, drastic reductions in the cost of computer animation, it is the writer's opinion that a specialized graphics system will be the ultimate answer, with features designed according to the requirements discovered during the course of producing the Fields and Waves movie, The overall design of this system will be given in the chapter on conclusions.

Nevertheless, the cost figures seem to indicate that our present system of computer animation is already capable of producing movies at a cost competitive with conventional animation, which generally costs a minimum of $1000 per minute for the simplest sequences.

In the case of computer animation, in contrast to hand animation, the author feels that the prospects for reducing costs are definitely good. Hence we are starting at a point where the costs of computer animation are already competitive, and can proceed to set up a design that will reduce those costs by a considerable amount.

Before going on to the chapter on conclusions, it is necessary to discuss items one and three of the list of cost categories. The cost of planning the script and accordion is quite unpredictable.

This should not be surprising, since this is the most creative part of the process and requires, in general, the most open-ended kind of work. On the other hand, if the scriptwriter has a firm idea of what he wants to say, and how he expects to say it, the process of writing might not be too lengthy. It is advisable to hire the services of a professional writer, as we have done, if the movie is to be released to the general public. In our case the writer was Tom Purdom, and the total cost for his services during the writing of the Fields and Waves movie was about $600. (This is 20 weeks at $30 per week for his part-time services during the writing of the script.)

Setting up the action flow sheets and object definitions from the script is a well-defined task and was accomplished in a period of about four weeks by two people (Mr. S. V. Sankaran and the author). This would indicate a cost of about $1000. Actually, this item is included in the overall labor costs of the film.

Finally, there is the item of the cost of printing the original filmstrips on the SC 4020. The minimum cost per job (the mounting of a single tape) is $25, and the overall rate is $100 per hour plus $0.005 per frame of 35 mm film. Since the SC 4020 can produce an average of about 200 frames of film per minute, tapes containing fewer than about 2000 frames and using less than 10 minutes of time are a losing proposition. At or above this minimum, the cost of producing original filmstrips is about $20 per minute of a single color, so that the cost was about $60 per minute for the three-color filmstrips we produced. Because of the minimum job cost on the SC 4020, we tried as much as possible to include several minutes of film on each tape run.
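
The $20-per-minute figure follows directly from these rates. The Python calculation below, given purely for illustration, reproduces it; the 24 frame-per-second projection rate of the finished film is the standard one and is assumed here.

HOURLY_RATE = 100.0           # SC 4020 time, dollars per hour
COST_PER_FRAME = 0.005        # one-half cent per frame of 35 mm film
SC4020_FRAMES_PER_MIN = 200   # average printing speed of the SC 4020
PROJECTION_FPS = 24           # assumed projection rate of the finished film

frames = PROJECTION_FPS * 60                       # 1440 frames in one minute of a single color
machine_minutes = frames / SC4020_FRAMES_PER_MIN   # 7.2 minutes of SC 4020 time
cost = (machine_minutes / 60.0) * HOURLY_RATE + frames * COST_PER_FRAME
print(round(cost, 2))                              # 19.2, i.e. roughly $20 per minute per color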

Table 7.3 shows the overall costs of producing the Fields and Waves movie, with a breakdown of per-minute costs for the various categories. In the next chapter we will consider the ways in which the costs related to the use of the computer can be reduced.

Table 7.4 gives the total costs incurred during software development and production of the Fields and Waves movie. It should be noted in particular that the development costs are non-recurring, since these programs are available to other users, and, in fact, more movies are being produced using these routines.

It also seems reasonable that the programmers involved in movie production should become more efficient as they gain further experience and establish a backlog of image generating routines.

Table 7.2: Computer Costs for Checkout and Production of the Fields and Waves Movie (does not include overhead for peripherals, tape rental, Calcomp plotter, etc.; add 10%)
Programmer / Scene | Duration (min) | Total Cost ($) | Cost per Min ($) | Colors | Cost per Min per Color ($) | Remarks
A - Scene 16 | 0.72 | 100 | 138 | 2 | 69 | Step waves, some curved lines, fixed background
A - Scene 12 | 2.36 | 600 | 254 | 3 | 85 | Step waves, straight lines only, fixed background
B - Scene 9 | 3.12 | 513 | 165 | 2 | 83 | Step waves, mostly cyclic
C - Scene 1.2 | 1.26 | 260 | 200 | 2 | 100 | Three-dimensional straight vectors, cyclic
D - Scenes 2 & 3 | 4.45 | 1400 | 315 | 2 | 158 | Two-dimensional, curved E-field lines
E - Scene 17.2 | 1.08 | 291 | 280 | 2 | 140 | Three-dimensional, straight vectors, some non-cyclic
F - Scenes 13 & 14 | 3.45 | 1400 | 405 | 2 | 203 | Two-dimensional, curved sine wave, some non-cyclic
F - Scene 17.1 | 0.36 | 100 | 323 | 2 | 162 | Two-dimensional, curved lines, non-cyclic
G - Scenes 6 & 7 | 1.46 | 630 | 438 | 2 | 219 | Two-dimensional, some curves
G - Scene 10 | 1.50 | 658 | 438 | 2 | 219 | Two-dimensional, many curves, all non-cyclic; difficult
H - Scene 8 | 1.50 | 550 | 368 | 4 | 92 | Two-dimensional, straight lines, non-cyclic
H - Scene 11 | 2.08 | 1500 | 720 | 3 | 240 | Step waves, some non-cyclic curves; this was high
I - Scene 1.1 | 2.00 | 600 | 300 | 3 | 100 | Two-dimensional, curves, cyclic
I - Scenes 4 & 5 | 2.30 | 1800 | 785 | 2 | 393 | Three-dimensional, many curves, some cyclic, very difficult
J - Introduction | 1.25 | 500 | 400 | 3 | 133 | Two-dimensional, curves, cyclic
J - Scene 15 | 2.50 | 1600 | 640 | 2 | 320 | Two-dimensional, curves, cyclic; this was much too high

Overall average cost: $400 per minute for color film ($440 per minute including overhead)

Table 7.3: Overall Cost Summary for the Production of the Fields and Waves Movie (costs are average per minute of finished film, based on a 35-minute length)
Item | $
Digital computer expenses, including checkout, production, and overhead (see Table 7.2) | 440
Programming labor, keypunching, etc. (including prorated overhead) | 306
SC 4020 printing costs | 60
Optical printing, editing, sound track, titling, etc. (see Table 7.1) | 159
Total average per-minute cost | 965
Table 7.4: Total Cost of Film Production and Software Development
Item | Computer Expenses (including Overheads) ($) | Labor ($)
1. For film production | 15400 | 7760
2. Development of routines:
   PLOTDD, including change to FASTBUF, SAVE (fast tape I/O routines, with 5 buffers), COR2DS, DS2COR, OVRLAY, RESTOR (disk I/O and storage manipulation routines) | 2000 | 2386
   LINEL (image plane) | 200 | 350
   PAN, TILT, ZOOM | 350 | 600
   Calcomp adaptation of LINEP, FRAME, INIT, INIPLT, ENDPLT | 200 | 400
   DOTLIN, LETTER, YDASH | 200 | 600
   MOVPRT, PLOTSAV | 100 | 300
   Camera routines for 3 dimensions, including POINT, LINE2, UPDATE | 950 | 1723
3. Labor overhead and benefits | - | 5394
TOTALS | 19400 | 19493

8. Conclusions

8.1 Principles and Techniques Developed in This Dissertation

This dissertation presents one major principle, several subsidiary principles, and a number of underlying techniques for the mass production of computer animated motion pictures. The major principle is that the motion picture must be completely planned and specified in comprehensive, all-inclusive detail, including the intent of each scene and each sequence of scenes, the narration and sound background, the transitions between scenes, and the formal definition of all the computer programs to be written for every scene and for each color separately, before any implementation is launched. Only in this way is it possible to prepare a truly professional quality motion picture that takes advantage of existing conventional and computer-generated techniques, whether the computer is digital, analog, hybrid, or simulated television. Only in this way can the creator of a full-length film control the skills of the group of technicians who will do the production itself. The formalization of this principle allows production of major films, because the creator is freed from direct consideration of the technology he will utilize. A corollary is that the techniques we have developed from this principle have universal applicability.

The subsidiary principles are described below. Underlying the major principle are numerous techniques, some of which can trace their origin to techniques already in existence, but which had to be basically redeveloped to become applicable to this new medium. In particular, these techniques incorporate the following items, which, taken together, I call the Scenario Description Language:

This formalized method of defining extensive visual sequences does not appear to have a precedent in the computer animation literature. Moreover, the system of supporting computer routines was designed specifically to serve the Scenario Description Language. There are dim adumbrations of parts of the system in work done elsewhere, but ours has been designed especially for use under the Scenario Description Language.

It was designed to provide an efficient, standardized, and practical way of communicating between the creator and programmers. It allows the direct translation from the standardized Scenario Description Language specifications, in particular, the Object Definitions and Timing Charts, through the Color- and Sound-Action Flowsheets to a computer program utilizing:

These allow, in highly efficient fashion, specification of commands for:

A subsidiary principle is that standardization is required for large-scale production; accordingly, a standardized, practical system designed for efficient mass production has been developed here.

Another subsidiary principle is that all scenes be structured into three separate packages: background settings, props, and dynamic objects. The various image components are manufactured at different times by different programmers, and are stored on disk either in final image form or as an image-generating routine. The efficiency of production has been considerably increased by this means, and it has been made possible only by the technology I designed and developed, as described in this dissertation.

The three-dimensional camera described here is, I believe, new in its conceptual design. Perspective projection as described by Zajac (Reference 8.1) or by Kubert, Szabo, and Giulieri (Reference 8.2), for example, does not allow spatial manipulation of the camera as a coordinate system in itself. They all utilize a projection plane of finite dimensions, which can evidently be manipulated, but only through an additional transformation. Moreover, the easy definition and manipulation of this camera allows the standardized communication between director and programmers that is absolutely necessary for efficient mass production.

Some specific original contributions include:

8.2 Anticipated Future Developments

It should be noted that the devices discussed here, especially as depicted in Figure 8.3, are being developed for movie production at the Moore School. This work is being carried out by P. Talbot, R. Coulter, R. Hwang, and D. Callahan. A master's thesis is being prepared by Miss Talbot describing a language for a PDP-8 to IBM 360/65 movie generating system; its working title is "Using the DEC-338 as an Input Terminal for Movie Making". Mr. Coulter is also writing a master's thesis with the working title "The DEC-338 and Spectra 70 as Parallel Processors in Movie Making". This work is being done under the direction of Professor John W. Carr, III. Other students who contributed to the MOVIES Project include S. S. Soo, R. Russell, K. Selemon, J. Mesirov, and A. Hayes.

Figure 8.1 shows a pictorial summary of the process of computer movie production developed by our section of the Movies project. Referring to this diagram, it seems that Steps 6, 7, and 8 are the places where improvement is most possible.

Figure 8.1: Summary of the Steps Involved in Producing a Full Color, Computer Animated Motion Picture with a Soundtrack

In particular, we propose the following:

1. An Increase in the Machine Code Efficiency of the Existing MOVIES Routines

The existing SCORS package and our extensions as described in Chapter 5 can be given an increased efficiency by a translation of these programs from FORTRAN IV into IBM 360 machine code. This is particularly true of the disk routines, and to some extent true of LINEL, LINEP, and PLOTDD. That the disk routines can be made at least one order of magnitude more efficient by machine code programming seems borne out by the comparable increase in efficiency obtained by the change from FORTRAN tape output to machine code tape output (the FASTBUF routines).

Another increase in efficiency could be obtained by designing routines that carry out some of the manipulations of images at the level of the SC 4020 instructions themselves. Such routines have been implemented on the IBM 7090, although apparently without the backing of a higher-level algebraic language such as the MOVIES package provides. For example, if an object is already defined by a set of SC 4020 instructions and the programmer wishes to carry out a PAN, the most efficient method is to modify the appropriate x coordinates in the array of instructions. Windowing logic is needed for segments that are shifted partly or wholly off the raster, but the chance for increased efficiency is still very good.
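
To make the idea concrete, the following sketch, written in Python rather than IBM 7090 or 360 code, applies a pan directly to stored vector commands. It assumes each instruction has already been reduced to a vector (x0, y0, x1, y1) on the 1024 by 1024 SC 4020 raster; the actual packed instruction format is that given in Reference 3.1.

RASTER_MAX = 1023    # SC 4020 addresses run from 0 to 1023 in each axis

def clip_x(xa, ya, xb, yb):
    # Move (xa, ya) back onto the raster along the segment toward (xb, yb).
    edge = 0 if xa < 0 else RASTER_MAX if xa > RASTER_MAX else None
    if edge is not None:
        t = (edge - xa) / (xb - xa)
        ya = ya + t * (yb - ya)
        xa = edge
    return xa, ya

def pan_vectors(vectors, dx):
    # Shift every stored vector command dx raster units in x, applying the
    # windowing logic: segments leaving the raster are dropped or clipped.
    panned = []
    for (x0, y0, x1, y1) in vectors:
        x0, x1 = x0 + dx, x1 + dx
        if max(x0, x1) < 0 or min(x0, x1) > RASTER_MAX:
            continue                      # entirely outside the window
        x0, y0 = clip_x(x0, y0, x1, y1)
        x1, y1 = clip_x(x1, y1, x0, y0)
        panned.append((x0, y0, x1, y1))
    return panned

A pan of a whole frame is then a single pass of pan_vectors over the stored instructions, with no re-evaluation of the image-generating functions themselves.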

2. Extension and Development of Languages Useful for Graphics.

This proposal breaks down into two categories: a) Extensions of FORTRAN IV or PL/I, if they are possible, and b) Development of a new language, which will be specified here. Part (a) might be somewhat more in the realm of possibility, if we can get access to the FORTRAN IV compiler, either on the IBM 360, or the RCA Spectra 70. Part (b) would require a reasonably complex compiler, if it were to be truly efficient, although it could be implemented by means of an interpreter. An interpreter would be inherently slow, however.

a) Extensions to FORTRAN IV: These were mentioned in the introductory section on the programming philosophy of our part of the MOVIES project. In particular, it would be useful to have the following:

INTERNAL FUNCTION statement,
which allows the programmer to define (i.e., bind the name of), at the local level, a closed subroutine or function. This would allow an easy definition of the object-defining parametric functions.
DO-LOOP SAMPLING PROCEDURE,
for checkout. This would take the form of SAMPLE N1, N2, N3, ... preceding the DO statement. The DO loop would be executed only for the values of the counter N1, N2, N3, etc. (not necessarily at even intervals, hence not achievable with the present FORTRAN IV). These SAMPLE statements would be inserted for checkout runs and then removed for the full production run. (A sketch of this sampling behavior is given after this list.)
IN LINE CODING
is a feature that would make high-speed I/O much more convenient, both in the case of tape and of disk.
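
None of these features exist in FORTRAN IV as it stands. The fragment below is a sketch, in Python and with invented names, of the behavior the SAMPLE statement is meant to have: during a checkout run only the listed loop-counter values are executed, while a production run executes every iteration.

def sample(counter_values, checkout, *sample_points):
    # Emulate the proposed SAMPLE statement: in a checkout run yield only the
    # listed counter values (which need not be evenly spaced); in a production
    # run yield every value unchanged.
    wanted = set(sample_points)
    for n in counter_values:
        if not checkout or n in wanted:
            yield n

# Checkout run over a 1440-frame loop, executing frames 1, 50, and 360 only.
for frame in sample(range(1, 1441), True, 1, 50, 360):
    print("frame", frame)     # stands in for the frame-generating statements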

b) A Specialized MOVIES Language: This would be implemented either by an interpreter, or, preferably, by a compiler with a specialized logic for producing code of maximum efficiency. The grammar is given below.

The programmer defines each object as a NAME followed by a list of components:

NAME(COMP1, COMP2, ...)

Each component in turn is defined by a NAME and list of components:

COMP1 (COMP11, COMP12, ...) 

The primitives are lines defined in three dimensions. They may be either (1) Constants, or (2) Functions of x, y, and z, or (3) Functions of x, y, z, and time. This is an important distinction, since (1) and (2) may be defined and computed once, while type (3) must be recomputed frame by frame.

Primitives are the following:

LINE((x0,y0,z0),(x1,y1,z1))
which simply draws the perspective projection of the line between the two vectors given above onto the picture plane, after the vectors have been converted to camera coordinates.
FUNCPLT(<Algol or FORTRAN IV program defining Dx, Dy, Dz and Fx(x,y,z,t), Fy(x,y,z,t), Fz(x,y,z,t)>)
The function evaluates the embedded program, computing Fx, Fy, Fz for each pass through the loop and plotting a perspective line from the previous triple (Fx, Fy, Fz) to the triple (Fx+Dx, Fy+Dy, Fz+Dz). Note that the compiler must set up a special list of those functions that are time dependent, in order to update them frame by frame. These functions are defined by algebraic expressions, and their names are bound at the local level. Other primitives might be convenient, such as CIRCLE, SIN, TITLE, etc.
CAMERA(x0,y0,z0,f,a1,a2,a3)
which defines the position and orientation of the camera with respect to the master coordinate system: (x0,y0,z0) is its position, f is its field of view, and a1,a2,a3 are its Euler angles. These variables could also be given as algebraic expressions dependent on the space variables and time, allowing the full range of camera motions (pan, tilt, zoom, truck, etc.). Note that we assume that the camera is looking out along its own minus z axis, as defined in Chapter 4. (A sketch of this projection is given after these definitions.)
CYCLE(OBJECT, DT)
defines OBJECT to be cyclic, with the complete cycle taking DT to run its course. If DT = 0, then OBJECT is a hold (the same picture each frame).
ROLLEM(OBJECT1, OBJECT2, ..., T0, T1)
causes the action to begin and the scene to be generated. Each object named is drawn in each frame of the sequence, with the shape and location of each object evaluated at the time value of that frame. The scene is generated for time flowing from T0 to T1.
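
Before the sample program, the camera semantics implied by LINE and CAMERA can be made concrete. The sketch below, in Python, is not part of the proposed language; it merely illustrates the steps: a point is translated to the camera position, rotated into camera axes, and projected onto the picture plane, with the camera looking out along its own minus z axis. The z-x-z Euler-angle convention and the use of f as a simple image-plane scale are assumptions made here for illustration; the exact definitions are those of Chapter 4.

import math

def matmul(a, b):
    # 3 x 3 matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rotation(a1, a2, a3):
    # Rotation matrix built from the three Euler angles (z-x-z convention assumed).
    c1, s1, c2, s2, c3, s3 = (math.cos(a1), math.sin(a1),
                              math.cos(a2), math.sin(a2),
                              math.cos(a3), math.sin(a3))
    rz1 = [[c1, -s1, 0], [s1, c1, 0], [0, 0, 1]]
    rx  = [[1, 0, 0], [0, c2, -s2], [0, s2, c2]]
    rz3 = [[c3, -s3, 0], [s3, c3, 0], [0, 0, 1]]
    return matmul(rz3, matmul(rx, rz1))

def project(point, cam, f, a1, a2, a3):
    # Return the picture-plane coordinates of a world point, or None if the
    # point lies behind the camera (which looks along its own minus z axis).
    r = rotation(a1, a2, a3)
    d = [point[i] - cam[i] for i in range(3)]
    x, y, z = (sum(r[i][k] * d[k] for k in range(3)) for i in range(3))
    if z >= 0:
        return None
    return (f * x / -z, f * y / -z)

# A point one unit above the origin, seen by a camera 10 units out on the +z axis.
print(project((0.0, 1.0, 0.0), (0.0, 0.0, 10.0), 2.0, 0.0, 0.0, 0.0))   # (0.0, 0.2)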

Sample Program

The reader will notice that the above language is defined for three dimensions only. This is not necessarily a limitation, since all two-dimensional objects could be displayed as flat objects in front of the camera. If this were not satisfactory, a change to two-dimensional routines would be relatively simple. The following scene is set up as a two-dimensional sine wave displayed in three-dimensional space, with the camera at some distance away.

The program draws an axis, an incident wave, a reflected wave, and a generator represented by a circle with a flashing sine wave inside the circle. Figure 8.2 shows a typical frame in the sequence. The sample program is given below.

WAVPLT(AXES, GENERATOR, INCWAV, REFWAV);
AXES(LINE((0,3,0),(0,-3,0)),LINE((0,0,0),(6*π,0,0)));
INCWAV(FUNCPLT(FOR X=0 UNTIL(X.EQ.X0),DX,
     (X,COS(W*(T-V*(X-X0))),0)));
REFWAV(FUNCPLT(FOR X=0 UNTIL(X.EQ.X0),DX,
     (X,RHO*COS(W*(T+V*(X-X0))),0)));
V(.4); RHO(-.5);
GENERATOR(WIRES, FLASHCIRC);
WIRES(LINE((0,3,0),(-2,3,0)),LINE((0,-3,0),(-2,-3,0)),
      LINE((-2,3,0),(-2,1,0)),LINE((-2,-3,0),(-2,-1,0)));
FLASHCIRC(CIRCLE((-2,0,0),.05),FLASH);
     FLASH(IF((T-[T]).LT..5)GENWV ELSE NIL);
GENWV(FUNCPLT(FOR X=-2.5 UNTIL(X.EQ.-1.5),
     .05,(X,.25*SIN(π*.25*X),0)));
Figure 8.2: Frame Drawn by Sample Program in Movies Language

The above statements define the two waves, the axes, and the generator. Note that the definition of the flashing circle includes a 'time dependent' test (the sine wave appears only for the first half of every second).

The scene is then set and produced by the statements.

  CAMERA((0,0,-1000),TFOV,PHI,0,0);
  TFOV(6./(1000*π));
  PHI(-6./(1000*π));
  ROLLEM(WAVPLT,0.,10.)

This would produce 10 seconds of the defined sequence. If the programmer then wished to make the reflection coefficient change over a period of 20 seconds, he could add the statement

   RHO(.5*((20-T)/20.))

Then the sequence would be generated by

   ROLLEM(WAVPLT,0.,20.) 

The new definition of RHO would automatically supersede the old one, and the amplitude of the reflected wave would then be reduced to zero over the 20-second period.
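
The frame-by-frame consequence can be illustrated with a small Python sketch. This is not the proposed compiler; the frame rate and the numerical values of W, V, and X0 are invented for illustration, and the sign of the coefficient follows the original definition RHO(-.5).

import math

W, V, X0 = 2 * math.pi, 0.4, 6 * math.pi     # assumed frequency, velocity factor, line length
FPS = 24                                     # assumed projection rate

def rho(t):
    # Reflection coefficient redefined to decay linearly to zero over 20 seconds.
    return -0.5 * (20.0 - t) / 20.0 if t < 20.0 else 0.0

def wave_samples(t, n=200):
    # Sample the incident and reflected waves along the line at time t.
    samples = []
    for i in range(n + 1):
        x = X0 * i / n
        incident  = math.cos(W * (t - V * (x - X0)))
        reflected = rho(t) * math.cos(W * (t + V * (x - X0)))
        samples.append((x, incident, reflected))
    return samples

# ROLLEM(WAVPLT,0.,20.) implies one evaluation of every time-dependent
# function per frame; the reflected wave fades out as rho(t) goes to zero.
for frame in range(20 * FPS):
    current = wave_samples(frame / FPS)      # these samples would go to the drawing routines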

Our next proposal for the MOVIES project is:

3. Development of Improved Checkout and Computation System

The hardware facilities available to the MOVIES project can grow in two ways: first, through the utilization of existing hardware after some software development, and second, through construction of a specialized processor and display system dedicated to the production of movies.

First, we consider the use of existing hardware. Figure 8.3 shows the configuration that is at least partially already being set up at the Moore School. The connection to the RCA Spectra 70 is apparently still in the future, but the particular central processor to be used is a detail. The features required for useful application of the system to the process of movie making are:

Figure 8.3: Movie Generating System Using Existing Hardware
  1. Fast-access time sharing so that the programming can be conversational. The programmer should also be able to set up production jobs and monitor them from the teletype console.
  2. Dual-option display software. The programmer should be able to program by typing in his program at the teletype and having his program text shown on the DEC 338 display scope. Then, when he wishes to view a particular sequence, he should be able to change the function of the scope to an image displaying device.
  3. Light-pen drawing and editing capability. This would allow the Object Definition part of the movie making process to take place at the programming level itself. This would considerably shorten the time and effort required to write the programs required to produce a particular movie sequence. In addition, the programmer should be able to edit the text of his programs by means of light-pen editing.
  4. Image-display during production runs. The programmer should be able to instruct his production program to display every frame of the sequence being generated by the production program at the time it is actually being computed. This would eliminate one of the present problems of production, namely, that the programmer is never absolutely sure that the sequence being produced is exact in every detail until the film is printed. (The subroutine MOVPRT allowed our programmers the closest approximation to this, but this still did not give us complete detail, and it was furnished only after the entire sequence had been executed.)

Such a system should ultimately allow the director himself to take over the tasks of the programmer. Since he can enter the object definitions directly into the computer, by the combined means of the light-pen and the MOVIES language, he should be able to generate the program entirely at the display and teletype console and execute the production program at once.

Figure 8.4 shows the configuration of a further possible step in the design of movie production hardware. The basic component of this system is the raster display. The idea is outlined in an article in the June 1968 issue of the Communications of the ACM (Reference 7.1). According to the article, this configuration would require a considerable outlay for the original hardware - about $50,000 for the basic components. These would consist of several TV-type raster displays plus a storage drum and associated circuitry. The computational load to generate the display images is rather high, as pointed out in the article, since an image requires a 0 or 1 bit to be computed for each of the roughly 250,000 addresses on the raster face. On the other hand, once the image has been computed, the CPU can be devoted to other tasks, since the image is stored on the drum. In addition, the image can be as complex as necessary without adding any greater load to the display hardware: all images require the same display time.

Figure 8.4: A Raster Display System for Generating Movies

There are several advantages to the raster display system:

  1. It is possible to service several displays at the same time, once the original drum hardware is installed, with a low per unit cost for the extra displays. The additional hardware required is only an additional monitor scope and teletype. Presumably, the drum has been equipped with a large number of tracks, and each one can store one entire image.
  2. Although a direct light-pen attachment is not possible, the user can point to any place on the screen by means of a joy-stick, since the x-y information from the stick can be read from precision potentiometers and the address displayed on the scope as a dot. Thus the user can still point and draw for definitional purposes.
  3. One of the greatest potentials of the raster display system is the flexibility of the information that can be generated for display and transfer to film. To begin with, shading is quite simple. The computer need only be given the definition of the boundary of the area to be shaded, and the shading procedure then consists of setting all points inside the area to 1 (assumed to be the bright code). Such shading is particularly important if more complex animation is desired. In particular, Mickey Mouse types of animation, in which the figures are shaded-in areas on the screen, require just such a shading capability. This is one type of occlusion problem. Since this is a rather onerous computational task, special hardware circuitry for computing such shading addresses would seem a worthwhile investment. (A sketch of such a fill procedure is given after this list.) There are other possibilities:
  4. Given the basic raster hardware, it is not difficult to add to the x and y addressing an option for setting a z coordinate that specifies the brightness to be displayed at each point. The BRAD system now in existence (Reference 7.1) has only a one-bit on-off brightness system, but the addition of two more tracks for each scope would allow an eight-shade brightness control. This would, of course, add again to the computational load, but the shading hardware circuitry could take care of some of the task when it shades in areas.
  5. The next logical step after the addition of shading is the addition of color. Again, this would require more tracks of information on the display drum and an increased computational load, but the possibility would be there, and the change could be made without major revisions of the basic hardware.
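
As an illustration of the shading procedure described in item 3 above, the sketch below (in Python, with a simple even-odd scan-line rule assumed; the special-purpose circuitry would do the equivalent in hardware) sets to 1 every raster point inside a closed polygonal boundary.

import math

WIDTH = HEIGHT = 512     # the BRAD raster is 512 by 512 points

def scanline_fill(raster, boundary):
    # Set to 1 (the bright code) every raster point whose centre lies inside
    # the closed polygon given as a list of (x, y) vertices, using the
    # even-odd rule: a scan line is filled between successive edge crossings.
    n = len(boundary)
    ys = [y for _, y in boundary]
    for row in range(max(0, int(min(ys))), min(HEIGHT - 1, int(max(ys))) + 1):
        yc = row + 0.5                       # sample at the centre of the row
        crossings = []
        for i in range(n):
            (x0, y0), (x1, y1) = boundary[i], boundary[(i + 1) % n]
            if (y0 <= yc) != (y1 <= yc):     # this edge crosses the scan line
                crossings.append(x0 + (yc - y0) * (x1 - x0) / (y1 - y0))
        crossings.sort()
        for xa, xb in zip(crossings[0::2], crossings[1::2]):
            for col in range(max(0, math.ceil(xa)), min(WIDTH - 1, math.floor(xb)) + 1):
                raster[row][col] = 1

raster = [bytearray(WIDTH) for _ in range(HEIGHT)]
scanline_fill(raster, [(100, 100), (400, 150), (250, 450)])    # a shaded triangle

Storing a three-bit shade at each point instead of the single bright code would give the eight-level brightness control suggested in item 4.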

Nothing has been said of how the images generated would be transferred to film. The fact is that the raster display system does not have the same display accuracy as the SC 4020: the raster is only 512 by 512 rather than the 1024 by 1024 of the SC 4020. It is possible that a printer-plotter similar to the SC 4020 could be set up with a raster display tube recording directly onto film. Another possibility is that the original computation be made at a higher resolution, and then recorded onto film at this high resolution. In either case, it would be necessary for the display tube used to record the information on film to be of the raster scan type also, since the vector-display type of tube is too slow.

Another possibility arises for the use of the generated images. Since it is likely that the animation produced will be intended for television broadcast, whether for educational or commercial purposes, it is quite possible that the output could be recorded directly onto magnetic video tape, and then be immediately available for replay.

Whether the output is recorded on video tape or not, the raster display can be included in the output channel as the visual sequence is being generated. This would allow an on-line checkout of the sequence as it is generated. In this manner the maximum economy can be achieved in generating long visual sequences, which is the major requirement for a practical computer animation system.

Glossary

CEL:
A sheet of transparent acetate .005" thick.
CROSS DISSOLVE:
An optical (camera) effect in which one scene fades out as another fades in.
CUT:
The abrupt end of an action.
FADE:
An optical effect in which each frame receives less light as the lens is closed down.
FIELD:
The size of the area to be photographed.
FRAME:
The area occupied by a single exposure or picture in a strip of film.
HOLD:
Keeping the same image for multiple frames.
LAP DISSOLVE:
A cross dissolve.
LIP SYNC:
Animation of mouth action to match dialog on the sound track.
MAGAZINE:
A film container.
ONES, TWOS, THREES:
The number of frames in which the same image is to be photographed.
OPTICAL EFFECTS:
These are usually created with the camera, and include such things as dissolves, fades, etc.
PAN:
Movement across the field.
RAW STOCK:
Unexposed film.
ZOOM:
A movement of the camera - either toward or away from the fixed background.

References

1.1 Penderghast, T F, Chairman: Users of Automatic Information Display Equipment (UAIDE) Computer Animation Committee, 1968 Year-End Report, pp 36-37.

1.2 Halas, J and Manvell, R: The Technique of Film Animation, Hastings House, New York, 1959, pp 171-209.

1.3 Knowlton, K C and Huggins, W H: Some Thoughts on Programming Languages for Computer Animation, 1968 Year-End Report, UAIDE Computer Animation Committee, pp 65-72.

3.1 Programmers' Reference Manual, SC 4020 Computer Recorder, Document No. 9500056, Stromberg-Carlson Corporation, San Diego, California, 1965.

3.2 Fielding, R: The Technique of Special Effects Cinematography, Hastings House, New York, 1965, pp 25-46.

3.3 Color As Seen and Photographed, Color Data Book E-74, published by the Eastman Kodak Company, Rochester, New York, 1966, pp 34-38.

3.4 Fisk, C J: Color Film Production on the SC 4020, Programming Bulletin No. F-40, Sandia Corporation, Albuquerque, New Mexico, 1967.

4.1 Goldstein, H: Classical Mechanics, Addison-Wesley, Reading, Mass., 1959, pp 107-109.

4.2 Katzen, J: A Conceptual Three Dimensional Camera for Computer Animation, Master's Thesis (to be completed, Spring 1969).

4.3 Halas, J and Manvell, R: The Technique of Film Animation, Hastings House, New York, 1959, pp 319-323.

4.4 Ramo, S and Whinnery, J R: Fields and Waves in Modern Radio, Wiley, New York, 1953, pp 138-140.

4.5 Atwood, S S: Electric and Magnetic Fields, Wiley, New York, 1949, pp 88-90.

5.1 Ostrand, T J: An Expanding Computer Operating System, Master's Thesis, University of Pennsylvania, Philadelphia, 1966.

7.1 Ophir, D, Rankowitz, S, Shepherd, B J, and Spinrod, R J: BRAD: The Brookhaven Raster Display, CACM, 11:6 (June) 1968.

8.1 Zajac, E E: Computer Graphics, unpublished class notes for E.E. 398, a course given at Polytechnic Institute of Brooklyn.

8.2 Kubert, B, Szabo, J, and Giulieri, S: The Perspective Representation of Functions of Two Variables, JACM, 15:2 (April) 1968, pp 193-204.

List of Tables

Table 7.1: Cost of the Non-computer Related Items Involved in the Production of the Fields and Waves Movie

Table 7.2: Computer Costs for Checkout and Production of the Fields and Waves Movie

Table 7.3: Overall Cost Summary for the Production of the Fields and Waves Movie

Table 7.4: Total Cost of Film Production and Software Development

List of Illustrations

Figure 2.1: The Form Used in the Sequence Story Board. A Typical Frame is Sketched in the Screen Outline

Figure 2.2: The Accordion, Showing a Sequence Depicted by Typical Frames and Accompanying Narrative

Figure 2.3: Sample of the Script Before the Cues Have Been Measured. Note that the lines of each page have been numbered in order to allow easy reference during retakes.

Figure 2.4: The Cued Script. Note that the cues are each measured in feet(') and frames (x)

Figure 2.5: Example of the Synchronized Color and Sound Action Flowchart (OD refers to a numbered Object Definition; TC refers to a Timing Chart)

Figure 2.6: Example of an Object Definition, Giving Exact Specification of the Red Image to be Generated for Scene 3

Figure 2.7: Timing Chart for Electric Field Intensity Variation Caused by Voltage Difference Between Two Wires

Figure 2.8: Timing Chart for Sinusoidal Incident and Reflected Waves with ρ=+1

Figure 2.9: Timing Chart for Sinusoidal Incident and Reflected Waves with ρ=-1

Figure 3.1: Schematic Diagram of the Stromberg-Carlson SC 4020 Printer-Plotter Showing Major Components and Flow of Instructions

Figure 3.2: Format of the Vector Generating Commands of the SC 4020

Figure 3.3: Charactron Shaped Beam Tube

Figure 3.4: Approximate Shape of a Line Drawn on Film by the SC 4020

Figure 3.5: Typical Set of Black and White Images Printed Through Filters to Make Composite Color 16 mm Film Image

Figure 3.6: The Oxberry Aerial-Image Animation System

Figure 3.7: Relative Sizes of 16 mm and 35 mm Film Images and Position of Sprocket Holes

Figure 3.8: Example of the Solution of the Occlusion Problem by Color Overprinting

Figure 4.1: Basic Scaling of the Camera Image Plane

Figure 4.2: Example of Object Defined by Subroutine and Photographed by the Virtual Camera

Figure 4.3: A (Presumably Microscopic) View of the Camera Picture Plane Onto Which All Points Are Projected

Figure 4.4: Geometry of the Perspective Projection of a Point in the Camera Field of View

Figure 4.5: From body to camera coordinates via the fixed reference coordinates

Figure 4.6: Typical Spatial Relation Between the Three Coordinate Systems

Figure 4.7: Time History of the Velocity of an Object Changing Position with Fairings

Figure 4.8: Coding of Arrows Representing the Various Intensities of a Normalized Vector Field (i.e., Highest Intensity Is One)

Figure 4.9: Typical Frame of Film Showing Sinusoidal Wave and Intensity Coded Electric Field Lines

Figure 4.10: Cross-Sectional Geometry of the Two Parallel Conducting Wires

Figure 4.11: Determination of the Radius of Curvature and Center of the Arc of an Electric Field Line

Figure 4.12: A Set of Electric Field Lines with Spacing Determined by Equal Flux Between Each Two Lines

Figure 4.13: Magnetic Field Lines Around Conducting Wires, the Lines Being Determined by Equipotential Curves

Figure 4.14: A Set of Magnetic Field Lines Around Two Conducting Wires with a Current

Figure 4.15: Perspective View of the Representation of a Plane-Polarized Wave in Space

Figure 5.1: Flowchart of the Function Plotting Logic. The parameter names used are explained in the text.

Figure 5.2: Example of a Production Schedule

Figure 5.3: Definition of the Scaling Constants for the Camera Image Plane

Figure 5.4: Object Definition for Rolling Circle

Figure 5.5: Scaling Argument Specifications

Figure 5.6: Example of a Scaling Error

Figure 5.7: Example of the Use of LINEV

Figure 5.8: Example of the Printer-Plot Output Used for First-Run Checkout

Figure 6.1: A Four-way Synchronizer

Figure 6.2: Conventional Cutting Room Signs, Marked in White China Marker Directly on the Workprint by the Editor

Figure 6.3: "Checker Board" Assembly of 16mm Masters to Produce Invisible Splices, Fades, and Dissolves. (Note that cross hatch denotes black film.)

Figure 8.1: Summary of the Steps Involved in Producing a Full Color, Computer Animated Motion Picture with a Soundtrack

Figure 8.2: Frame Drawn by Sample Program in Movies Language

Figure 8.3: Movie Generating System Using Existing Hardware

Figure 8.4: A Raster Display System for Generating Movies

Bibliography

1. Anderson, S E: CALD and CAPER Instruction Manuals (plus addenda), May 1967. Published at Syracuse University.

2. Appel, A: "The Visibility Problem and Machine Rendering of Solids", IBM Research Report RC 1618, May 20, 1966.

3. Atwood, S S: Electric and Magnetic Fields, Wiley, New York, 1949.

4. Baddeley, W H: The Technique of Documentary Film Production, Hastings House, New York, 1963.

5. Bork, A M: "Quantum Mechanical Harmonic Oscillator: A Computer-Produced Film", The American Journal of Physics, vol. 34, No. 6, p. l470.

6. Brenton, R G: Computer Generated Motion Pictures, Science 155, p. 1662 (1967).

7. Color as Seen and Photographed, Color Data Book E-74, published by the Eastman Kodak Company, Rochester, New York, 1966.

8. Deily, D, Everlof, B, Purdom, T, and Rubinoff, M: Electromagnetic Fields and Waves, Part I: Transmission Lines, a documentary film produced by the Moore School Information Systems Laboratory, released November 1968.

9. East, Douglas A.: "Computer Animation", Industrial Photography, Vol 16, March 1967.

10. Fielding, R: The Technique of Special Effects Cinematography, Hastings House, New York, 1965.

11. Fischetti, J: Saveit, Routines for Saving and Regenerating S-C 4020 Records, UAIDE, 1967.

12. Fisk, C J: Color Film Production on the SC 4020, Programming Bulletin No. F-40, Sandia Corporation, Albuquerque, New Mexico, 1967

13. Goldstein, H: Classical Mechanics, Addison-Wesley, Reading, Mass, 1959.

14. Halas, J and Manvell, R: The Technique of Film Animation, Hastings House, New York, 1959.

15. Halda, E J: "Computer Output in the Form of an Animated Color Movie", UAIDE, 1966, Sec. I, pp 1-5.

16. Hendricks, L: Color Development for the SC 4020, Sandia Corporation, Albuquerque, New Mexico (company bulletin, undated).

17. Holden, J C: "Computer-Animated Movies", Emerging Concepts in Computer Graphics, January 1968.

18. Huggins, W H: "FORTRAN IV Program for a Film about Moving Rectangles", Conference on Computer Animation, July 1967.

19. Katzen, J: A Conceptual Three Dimensional Camera for Computer Animation, Master's Thesis (to be completed, Spring 1969).

20. Knowlton, K C: "A Computer Technique for Producing Animated Movies", AFIPS Conference Proceedings, Vol. 25, April 1964.

21. Kubert, B, Szabo, J, and Giulieri, S: The Perspective Representation of Functions of Two Variables, JACM, 15:2 (April) 1968, pp 193-204.

22. Meily, H E and Davis, R N: "CAMP--Computer Animated Movie Procedures", UAIDE, 1967, pp. 23-33.

23. Noll, A M: "A Computer Technique for Displaying N-Dimensional Hyperobjects", CACM, August 1967.

24. Ophir, D, Rankowitz, S, Shepherd, B J, and Spinrod, R J: BRAD: The Brookhaven Raster Display, CACM, 11:6 (June) 1968.

25. Ostrand, T J: An Expanding Computer Operating System, Master's Thesis, University of Pennsylvania, Philadelphia, 1966

26. Perkel, D H: "Neuro-Electric Activity Displayed by Computer-Produced Films", UAIDE, 1966, Sec. XIV, pp. 1-7.

27. Programmers' Reference Manual, SC 4020 Computer Recorder, Document No. 9500056, Stromberg-Carlson Corporation, San Diego, Calif., 1965.

28. Quann, J J and Chapman, G: Computer Generated Motion Pictures for Space Research, CSC Report, 5:1, 1968.

29. Ramo, S, and Whinnery, J R: Fields and Waves in Modern Radio, Wiley, New York, 1953.

30. Reisz, K: The Technique of Film Editing, Hastings House, New York, 1958.

31. "Report from Bell Laboratories - Movies Via Computer", Scientific American, March 1968, Vol. 218, No. 3, p 13.

32. Weiss, R A: "BE VISION, a Package of IBM 7090 FORTRAN Programs to Draw Orthographic Views of Combinations of Plane and Quadric Surfaces", JACM, 13, April 1966, pp. 194-201.

33. Welch, J E: "Moving Picture Computer Output", UAIDE, 1965, Sec. XXI, pp 1-18.

34. Witte, B F W: Algorithm 332, Jacobi Polynomials, CACM, 11:6 (June) 1968.

35. Zajac, E E: "Computer Animation: A New Scientific and Educational Tool", Journal of the SMPTE, Vol. 74, November 1965, pp. 1006-1008.

36. Zajac, E E: Computer Graphics, unpublished class notes for E.E. 398, a course given at Polytechnic Institute of Brooklyn; a technique of perspective projection is given, but no conceptual camera is defined.

37. Zajac, E E: "Computer-Made Perspective Movies as a Scientific and Communication Tool", CACM, Vol. 7, 1964, pp 169-170.

Films Containing Computer Animation

1. Bell Telephone Laboratories: A Computer Technique for Producing Animated Movies, 16 mm, silent, B/W, 8 minutes.

2. Bell Telephone Laboratories: Two Paradoxes.

3. Bork, A M: Quantum Mechanical Harmonic Oscillator.

4. Commission on Engineering Education: Movies From Computers - An Interim Report, 16 mm, sound, B/W, 20 minutes.

5. Dworkin: Diffusion - Part I, and Diffusion - Part II, Bell Telephone Laboratories.

6. Educational Development Center (EDC): Kinematics (distributed in six sections, each in B/W, 1 minute long: Velocity and Acceleration Vectors, Velocity in Circular and Simple Harmonic Motion, Velocity and Acceleration in Circular Motion, Velocity and Acceleration in Simple Harmonic Motion, Velocity and Acceleration in Free Fall).

7. EDC: Wave Velocities and Dispersion,

8. Huggins, W. H.: Harmonic Phasors, Johns Hopkins University.

9. Knowlton, K C. and Sinden, F: Computer Animation Examples, Bell Telephone Laboratories.

10. Knowlton, K C: The L Language, Bell Telephone Laboratories.

11. Lumley, J: Eulerian and Lagrangian Description, EDC production.

12. Michaels, G: Computer Studies of Fluid Dynamics, Los Alamos Scientific Laboratory.

13. National Film Board of Canada (N.F.B.C.): Advance of the Perihelion, distributed by Encyclopedia Britannica Films.

14. NFBC: Computer I, and Computer II, two short films showing the use of a computer graphics terminal in the study of planetary orbits.

15. NFBC: Kepler's Laws

16. NFBC: Rutherford Scattering.

17. NFBC: Superposition

18. Sandia Corporation: ACCEL - Automated Circuit Card Etching Lay-out, 16 mm, sound, B/W, 20 minutes.

19. Sandia Corporation: Sandia Color Loop, 16 mm, silent, color, short continuous loop.

20. Schwartz, Schey, Goldberg: 1. Quantum Mechanical Scattering in One Dimension (distributed in four separate sections) 2. Free Wave Packets. 3. Particle in a Box. All produced by EDC.

21. Schwartz, Schey, and Goldberg: Scattering of Quantum Mechanical Wave Packets from Potential Wells and Barriers, 10 minutes, silent, Livermore Laboratories, University of California.

22. Sinden, F W: Force, Mass and Motion, Bell Labs, Sound, B/W, 10 minutes

23. Stanford Research Institute: Collection of Short Animated Sequences, 16 mm, silent, B/W, 10 minutes.

24. Stromberg Datagraphics Corporation: Computer Made Movies as a Scientific Communications Tool, 16 mm, silent, B/W, 8 minutes.

25. Stromberg Datagraphics Corporation: A World Orbit Display Using the SC 4020, 16 mm, sound, color, 20 minutes.

26. Zajac, E. E.: Simulation of a Two-Gyro Gravity Gradient Attitude Control System, sound, B/W, 6 minutes, Bell Telephone Laboratories; short continuous loop.