
Issue 32

December 1993

Graphics Coordinator Report

AGOCG and Multimedia Applications Support

At the time of writing this report, AGOCG have submitted a proposal to the HEFC's JISC New Technologies Initiative. This is for a co-ordinated initiative for multimedia. The intent of this initiative is to offer the community support in this new area of technology in the way that AGOCG has in other areas. The proposal includes a support officer and a series of projects involving evaluations, assessments of the market place and technologies which emerge over the period of the initiative. Should this project receive funding then we will be looking to put in place the recommendations of the workshop on "Multimedia in Higher Education: Portability and Networking" which is taking place at the start of December. The results of the workshop will be produced as a technical report. If you wish to get a copy of this report as soon as it is out (around Christmas hopefully) then please send a request to Joanne Barradell for AGOCG Technical Report 24.

New Media

AGOCG commissioned a report on new media which is now available as an AGOCG technical report. This report is entitled The Exploitation of New Media for Text, Graphics, Images and Sound and has been written by Rae Earnshaw and Alan Haigh of the University of Leeds. To obtain a copy please send a request for AGOCG Technical Report 23 to Joanne Barradell.

Colour in Computer Graphics

A set of slides on the subject of colour in computer graphics produced by Lindsay MacDonald is available for £50 + VAT for UK higher education sites.

Colour is now a standard feature of all computer graphics systems, not only in monitors but also in hardcopy printers. The proper use of colour can bring tremendous benefits for the user, improving the efficiency of the user interface and emphasising the important aspects of the information or message of the display. All too often, however, we see colour mis-used or over-used to the extent that in some cases interpretation of the display is rendered very difficult.

This slide set, arranged in three modules of 12 slides each, will be valuable for both teachers and product designers in demonstrating various aspects of colour. The first module illustrates the basics of human colour vision and demonstrates some of the perceptual factors that affect the appearance of colour. The second module describes the principles of colour models used in computer graphics and how they can be visualised. The third module provides examples of how to use colour effectively in displays for both information presentation and user interfaces.

Included with the slides are detailed notes that could form the basis of up to three lectures on the subject, plus a bibliography for each module suggesting up-to-date books for further reading.

Contact me for an order form.

Image Processing Training Materials

These are proving popular and provide a starter pack for people wishing to introduce the ERDAS and VISILOG packages. The training materials come as a set including materials for both packages for the lecturer (sets of OHPs and notes) and the student (workbooks). The materials include masters for your own onsite copying. Contact me for an order form. The cost is £50.

Anne Mumford

AVS Training Materials

A set of training materials is now available for AVS; this short note describes their content and how to obtain them.

What do the materials contain?

The following documents comprise the AVS introductory and advanced courses developed as part of the Advisory Group on Computer Graphics (AGOCG) Visualisation Support Project at the Computer Graphics Unit, Manchester Computing Centre, University of Manchester.

There are also a number of data files and modules which support the practical exercises described in the course notes.

How do I obtain the materials?

The materials are available in PostScript format along with the supporting data files and modules via anonymous FTP from the University of Manchester (ftp.mcc.ac.uk) or the International AVS Center (avs.ncsc.org).

For example, to obtain the materials from the University of Manchester you would first type the following:

ftp ftp.mcc.ac.uk

When you are connected to the server you should log in as anonymous and supply your email address as the password. To access the training materials you must move to the subdirectory pub/cgu/avs/avs_course and set the transfer to binary mode before getting the files:

ftp> cd pub/cgu/avs/avs_course 
ftp> binary 
ftp> mget *

What do the files contain?

The following files can be found in the avs_course subdirectory:

Reproducing and using the materials

The following acknowledgement should be made when using these materials or if you develop any documents based on these materials:

The original AVS training materials were developed as part of the Advisory Group on Computer Graphics (AGOCG) Visualisation Support Project at the Computer Graphics Unit, Manchester Computing Centre, University of Manchester, United Kingdom.

Copyright of the material remains with the University of Manchester but these materials may be freely copied for educational use by staff and students in academic institutes. Commercial users and/or companies should first contact the Computer Graphics Unit.

Any comments or suggestions

Any comments or suggestions on the training materials should be sent to Computer Graphics Unit, Manchester Computing Centre.

Steve Larkin

CGM Questions and Answers

This column will answer some of the frequently asked questions about CGM. It can also be found on the CHEST-CGM list at Mailbase. To join, send the following message to mailbase@uk.ac.mailbase:

subscribe chest-cgm <first_name> <second_name>

Question 1:

Why do I sometimes lose all or part of the picture when viewing a CGM?

Answer:

This is often caused by the CGM failing to specify the background colour. CGMs from early versions of Uniras, for example, do not specify the background colour. This is bad practice and can lead to parts of the picture not being visible.

CGM has the capability to specify background colour but some applications do not use it when they generate CGMs. Instead they assume that the background colour is black and the foreground colour is white, or vice versa. This leads to lines and text disappearing on systems such as Macs and Windows where the default background colour is white.

The CGM standard does not specify a default background and foreground colour. It states that they are device dependent, but must be different. This is deliberate in order to allow for systems which have black as the default background, DOS screens for example, and for systems which have white as the default background, hardcopy or Mac or Windows screens for example.

If just colours 0 and 1 are used (background and foreground) and neither is explicitly set to a particular colour, then an interpreter should give sensible results in both cases, i.e. white on black or black on white, depending upon the default for the device. The trouble starts when colour 1 is specified as, say, white, but colour 0 is not specified. This works fine for a device with a default black background, but for a device with a default white background it would yield white on white, i.e. nothing is seen. This is why some picture elements sometimes cannot be seen.

When generating a CGM you should ensure that either both colours 0 and 1 are defined or that neither is defined. This avoids the problem and is recommended practice.
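
By way of illustration, the short Perl fragment below sketches the interpreter-side logic just described. It is purely hypothetical (the subroutine and colour names are invented, and no particular interpreter works exactly this way), but it shows why a CGM that defines colour 1 and not colour 0 can vanish on a white-background device:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of how an interpreter might resolve CGM colour 0
# (background) and colour 1 (foreground) against the device default.
# Pass undef for a colour the metafile does not define.
sub resolve_colours {
    my ($device_bg, $cgm_bg, $cgm_fg) = @_;

    # Colour 0: use the metafile value if given, else the device default.
    my $bg = defined $cgm_bg ? $cgm_bg : $device_bg;

    # Colour 1: use the metafile value if given, else the opposite of the
    # background, so an unspecified pair still gives a visible picture.
    my $fg = defined $cgm_fg ? $cgm_fg
                             : ($bg eq 'black' ? 'white' : 'black');

    # The troublesome case: foreground defined, background left to the
    # device default, and the two coincide.
    warn "foreground and background are both '$fg'\n" if $fg eq $bg;
    return ($bg, $fg);
}

# A CGM that sets colour 1 to white but leaves colour 0 undefined,
# viewed on a device whose default background is white:
my ($bg, $fg) = resolve_colours('white', undef, 'white');   # white on white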

If you did not generate the CGM, and so cannot control which colours are defined, you probably still want to be able to view the picture. Many interpreters allow the background for a CGM to be set to a particular colour. For example, UPM allows this when reading in a CGM. If things seem to be missing I set the background to 50% grey. That way I can always see lines, text, etc whether they are drawn in black or white.

Alan Francis, AGOCG CGM Support

Teaching Aids for an Introduction to HyTime using SGML and CGM

SGML Project Pilot Study for AGOCG

This article describes the work undertaken to convert the text of a large set of slides and lecture notes, used to provide an introduction to the Hypermedia/Time-based Structuring Language (HyTime), from the Standard Generalized Markup Language (SGML) to PostScript format. The work was funded by AGOCG to investigate the practicalities of creating teaching materials in a form independent of the platform they were written on; the solution adopted was to use a mixture of SGML and CGM.

SGML is an internationally agreed standard for the encoding of information in a way that is independent of the machine, application, language or system being used. This makes it possible to move files created by SGML-aware software from one computer to another without any loss of information or intervention on the part of the user. The Computer Graphics Metafile (CGM) is another international standard, which aims to provide the same degree of portability for graphics.

The main problem with the SGML document instance is that it cannot easily be converted directly to PostScript; it first has to be translated into an intermediate format and then converted to PostScript from there.

The pilot study undertook the task of producing two translation systems, one to be run on UNIX systems and the other to be run on DOS systems (Perl needs at least 4MB of RAM to run). The slides and lecture notes were written by an external SGML consultant and were changed slightly to conform to the universities' standard guide on the layout of slides produced by AGOCG.

There are four steps that need to be undertaken in order to complete the conversion:

  1. Develop the text of the slides.
  2. Develop the graphics of the slides.
  3. Generate the slides in an intermediate format.
  4. Produce the PostScript output.

The UNIX System

The first step is to produce a Document Type Definition (DTD) and then to produce an SGML document instance conforming to that DTD (the document instance is the text of the slides marked up with tags conforming to the DTD). A validating parser (e.g. Sgmls V1.1) checks that the document instance conforms to the DTD and, if so, produces an Element Structure Information Set (ESIS) file which contains the fully marked-up text. A file containing the mapping information between the tags declared in the DTD (and used in the document instance) and the chosen word-processing language (in this case LaTeX) also needs to be set up. The main conversion script is written in Perl, a scripting language that is a cross between shell scripts and C.
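
The script itself is not reproduced in this article, but a minimal sketch of the idea (in Perl, with invented element names and an inlined mapping table; the real script reads its mapping from a separate file) might look like this:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical sketch of the ESIS-to-LaTeX step.  sgmls writes one ESIS
# record per line: "(GI" for a start-tag, ")GI" for an end-tag and
# "-data" for character data (with "\n" marking record ends).
my %start = ( SLIDE => "\\begin{slide}\n", TITLE => "{\\Large ", ITEM => "\\item " );
my %end   = ( SLIDE => "\\end{slide}\n",   TITLE => "}\n",       ITEM => "\n" );

while (my $line = <>) {
    chomp $line;
    next unless length $line;
    my $code = substr($line, 0, 1);
    my $rest = substr($line, 1);
    if    ($code eq '(') { print $start{$rest} if exists $start{$rest}; }   # start-tag
    elsif ($code eq ')') { print $end{$rest}   if exists $end{$rest};   }   # end-tag
    elsif ($code eq '-') { $rest =~ s/\\n/\n/g; print $rest; }              # character data
    # attribute lines ("A...") and other ESIS records are ignored here
}

Run as something like sgmls slides.sgml | perl esis2tex.pl > slides.tex (the file names are invented), this reproduces the general shape of steps 4 and 5 in the summary below.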

The diagrams needed for the slides were produced in the Computer Graphics Metafile (CGM) format by Uniras V6.3b and converted to Encapsulated PostScript (EPS) by the freely available converter RALCGM V3.00. LaTeX incorporates the EPS files into the text automatically, with no user interaction, and produces the required PostScript output.

To summarise:

  1. An editor was used to create the DTD and the SGML document instance file.
  2. Uniras V6.3b was used to create the CGM files.
  3. RALCGM V3.00 was used to convert the CGM files to Encapsulated PostScript.
  4. Sgmls V1.1 was used to create an ESIS file from the DTD and SGML document instance files.
  5. Perl V4.0.1.4 patchlevel 10 converted the ESIS file into LaTeX.
  6. LaTeX combined the EPS files and the LaTeX file and produced the PostScript output.

NB. A UNIX Makefile was used to automate steps 3 to 5 above.

Figure 1: The UNIX processing flow

Figure 2: The DOS processing flow

The DOS System

The DTD was created using an editor and was then used by Author/Editor (an SGML-aware package) in order to create the SGML document instance. The fully tagged document instance file produced by Author/Editor was then used by Sgmls V1.1 to create the ESIS file used by the Perl script. The mapping file was also set up (in this case a conversion to Rich Text Format (RTF), which most PC-based word-processing packages can import).

The CGM files were produced by CA-Cricket Presents (but could also have been produced by Uniras on the UNIX system and transferred to the PC) and Microsoft Word For Windows V2.0 was used to import the RTF file generated by the Perl script. Each CGM picture had to be incorporated separately into the text by the user, as specified by comments left by the Perl script. PostScript output could then be produced from within the word-processing package.

To summarise:

  1. An editor was used to create the DTD file.
  2. The SGML-aware package, Author/Editor, was used to create the SGML document instance file conforming to the DTD.
  3. CA-Cricket Presents was used to create the CGM files (Uniras could also have been used).
  4. Perl was used to convert the SGML document instance produced by Author/Editor to a Rich Text Format (RTF) file.
  5. Microsoft Word For Windows V2.0 was used to read in the RTF file and the user had to insert the pictures manually into the areas reserved for them by the Perl script. It was then used to produce an Encapsulated PostScript file.

As a check on the versatility of the main Perl script, the slides were also converted to Nroff (another UNIX based text formatter) just by changing the mapping file. The final report was written as an SGML document instance using a different DTD and a different mapping file, but with the same Perl script, which again shows the versatility of the system.

The slides can be totally rewritten but, as long as the new set conforms to the DTD, the same main Perl script and mapping file can be used to produce PostScript output (both on UNIX and DOS systems). If someone wishes to produce their own material, the main Perl script can be used but a different mapping file will need to be created.
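
As a purely illustrative example (the project's actual mapping-file format is not reproduced here), the invented tables from the sketch above could be retargeted at nroff rather than LaTeX simply by substituting nroff requests for the LaTeX commands:

# Hypothetical nroff mapping: same invented elements, different output
# markup.  Substituting these tables is the only change the conversion
# sketch above would need in order to emit nroff instead of LaTeX.
%start = ( SLIDE => ".bp\n", TITLE => ".ce\n", ITEM => "\n\\(bu " );
%end   = ( SLIDE => "\n",    TITLE => "\n",    ITEM => "\n" );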

A complete set of scripts, files and the main report will be made available from the ftp site sgml@uk.ac.exeter.

Tony Holtham, The SGML Project, University of Exeter

IEEE Visualization '93

25-29 October 1993 San Jose, USA

Summary

Visualization 93 took place 25-29 October 1993 in San Jose, California, and attracted 560 delegates to the week's program of Tutorials, Panels, Papers and Case Studies. It was also held in conjunction with the first Symposium on Research Frontiers in Virtual Reality, 25-26 October. This will be the subject of a further report in the future. This first article reports on the Keynote address; the report on the Conference will be continued in the next issue.

Report

Visualization 93 comprised 2 Workshops, 9 pre-Conference Tutorials, and the main Conference sessions of papers, panels and case studies. There were also three additional evening sessions on "How to Lie and Confuse with Visualization" (yes, all the bad examples of misrepresenting data you thought you knew about - and more!), Research Problems in Visualization (to compile a list of current problems that people are working on, or that should be worked on), and Data Visualization: Research Issues, Applications, and Future Directions (a summary of the work of a Panel set up by the Office of Naval Research).

The Keynote Address on A Vision for Visualization was given by Prof Fred Brooks, Kenan Professor of Computer Science at the University of North Carolina at Chapel Hill. Dr Brooks was Corporate Project Manager for the System/360 at IBM, including the development of the System/360 family hardware and the Operating System/360 software, for which he shared the National Medal of Technology with Bob Evans and Erich Bloch, and for which he received the IEEE Computer Society McDowell Award. His research has been in computer architecture, software engineering, and interactive 3D computer graphics. His best known book is The Mythical Man-Month: Essays on Software Engineering, which has sold over 200,000 copies.

He reviewed the following topics:

Visualization is currently a collection of ad hoc techniques, but there is no detailed reference book on how to do it for different types and classes of data. The role of art, aesthetics, and perception also needs to be taken into account.

Motives and modes of visualization depend on the nature and purpose of the visualization. When considering the question "What does a protein look like?" there are a number of related questions that have to be considered, such as: Look at this visualization! What can I see in this visualization? What can you see in this visualization? These points may be summarised as:

  1. Presentation graphics.
  2. Exploratory graphics to enable a scientist to gain more knowledge - for example, visualizations that suggest hypotheses for further investigations and experiments.
  3. Publication of the visualization and the data sets to enable other scientists to use the data for their own purposes. Indeed, the provision of the data should be a precondition of the acceptance of the paper, as it is in the molecular biology field.

There was also a considerable difference between the visualization of a protein and the visualization of an architectural design. For the former, no-one really knows what it looks like, whereas the latter can be verified by comparing a video of the simulation with a real video taken when the building is constructed. This offers some quantitative measure of the success of visualizations during the earlier design stages.

When considering visualization as not just a picture, but also an experience, we should consider the use of time (i.e. moving images) as a user-controlled variable, user control of the viewpoint, user enquiry of values, sound and sonification, the use of force and touch, and the use of real models. What kinds of abstract variables can sound be used for? How much freight will the haptic senses carry in conjunction with visual images? How much can it be used to effectively complement visual images? Stereolithography can be used to produce real, physical models which can be touched and felt. What extra information can this provide to scientists who are used to working with such real models? Scanning tunnelling microscopes can utilise the haptic senses in conjunction with the visual.

How much of the visualization design space have we currently explored? Designed visualizations involve keeping track of the exploration. Scientists often use sketches as part of their exploration of science. In Bergman's data-sketching concept the user supplies the topology using point coordinates from a database. Can these be incorporated into visualizations?

Some current questions on techniques included immersion versus 'through the window' graphics, stereopsis, and multimodal interaction. Virtual reality systems claimed to give added benefits but there was little serious study of the actual quantitative benefits for various application domains. Immersion seemed to provide intuitive navigation in that the user could change their viewpoint without thinking, whereas for window/screen systems this was not the case. We don't yet really know how much immersion can do over and above conventional interactive graphics systems. Stereopsis may well be overrated, for example in architectural drawings. For molecular modelling it was a disadvantage, because often the scientist wanted to know if one part of the structure was physically parallel to another. With regard to multimodal interaction, we needed to think about geometry, colour, depth cues, sound, and forces. Colour was not three independent variables. Could two independent variables be carried by colour? Possibly. Texture could also be used to characterise independent variables.

We think in real 3D space, but it is not so easy to think in transform space or abstract spaces. Users seem to get lost more easily in model space, and it is not at all clear that this is simply due to having a more limited field of view. Generally more screen space is needed to enable the user to see global views of the image at the same time as manipulation is being done on part of it (e.g. via zoom). Navigation could be concerned with moving the viewpoint in 3D space or wayfinding in 3D space, or both. Manipulation of virtual objects is more easily done by a 6D interactive device (e.g. trackball) than a 2D mouse. There are the issues of positional versus velocity devices, device overloading and kinesthetic memory, and the potential of using pointing devices for nouns and voice input for commands.

Music is often used as part of the sound track on visualization videos, yet we all believe that it carries messages deep into the mind. What are these messages and are they a fair commentary on the technical content of the images on the video? Unless the messages are intentional and the coding is accurate, the music will lower the signal/noise ratio of a visualization. Often the choice of music was influenced by 'show business' factors. Are we informing or entertaining? Are we sacrificing accuracy for aesthetics in some cases?

Truthfulness is becoming a key issue in visualization. In entertainment the participants are encouraged to willingly suspend disbelief. Visualization has an obligation to be completely truthful and inform accurately. If we really inform, we will also impress. There have been some important simulations and visualizations where what is shown in the visualization is not exactly the same as the simulation. Other entities have been added to the visualization e.g. houses, trees etc, which were not present when the simulation was done. The pictures look pretty, but truth has been sacrificed.

Some planetary simulations have the vertical dimension exaggerated by up to 500% in order to enable the terrain to be clearly seen. This should not be done without a clear statement on the video. Otherwise all the viewers in educational establishments who see the video get a mistaken picture of the aspect ratio.

Rae Earnshaw

IRIS Explorer User Group Meeting at Eurographics '93

A meeting for users of IRIS Explorer, the modular scientific visualization system, was held during Eurographics'93 in Barcelona, Spain.

The meeting was held on 8th September 1993 at the Palau de Congressos and was chaired by Ken Brodlie from Leeds University. Ken is a lecturer in the School of Computer Studies, and is president of the European IRIS Explorer User Group.

The first speaker was Brian Ford of NAG Ltd who described the background to NAG's work with IRIS Explorer. NAG have been producing software for scientists and engineers since the first appearance of their numerical library more than twenty years ago, and have since produced a variety of statistical, graphical and symbolic software products. In 1992, NAG contracted with Silicon Graphics to port IRIS Explorer to Sun, IBM, HP and DEC platforms. Progress on the porting work was described by Robert Iles from NAG, who drew attention to the Sun version of IRIS Explorer which was being demonstrated on the NAG stand at the exhibition. Robert also outlined the pricing policy which NAG have adopted for the initial launch period of their ports, and concluded by introducing the new IRIS Explorer Centers which have been set up in the UK and North America to act as "one-stop shops" for information, distribution and support for IRIS Explorer (see below for contact details).

The third speaker at the meeting was Bob Brown (manager of the IRIS Explorer development team at SGI) who discussed the new features that have gone into the recently-released Version 2.0 of IRIS Explorer. These included:

In addition, many new modules have been added to the default set, facilitating the display of vector datasets and annotation, and improving support for picking in the window of the Render module. The source code to more modules (including Render) was provided in Version 2.0, which will help users who are developing their own modules.

Bob emphasised the way in which the new features made possible (or easy) the creation of animations of dynamically evolving scenes: for example, the improvements in the firing algorithm, the introduction of loops and the scripting language, and the new modules which supported the generation and storage of animations.

Jeremy Walton from NAG then described some of the technical aspects of NAG's ports. These are currently based on Portable Graphics' version of GL, although future ports will use OpenGL and Portable Graphics' implementation of IRIS Inventor, as versions of this become available. Current plans for the Cray port of Version 2.0 were announced; this is being performed by Cray, but will be supported and updated by NAG, and is available through the IRIS Explorer Center. Unlike the previous Cray port, this version will incorporate the DataScribe and Module Builder as well as the Map Editor. Only the GL-based modules (e.g. Render) will not be provided, though these may be produced at a later release.

Jeremy said that a roughly annual major update cycle for IRIS Explorer was planned, with the module suite being updated more often. He concluded by describing some of NAG's work in developing new modules, such as the new 2D graphics module which was being demonstrated on the NAG stand; this is based on NAG's graphics library, and it is envisaged that other NAG products will form the basis for new module suites.

The final speaker was Chris Harris of Du Pont Pixel who introduced PX/IRIS Explorer. This is a port of IRIS Explorer 2.0 for Sun, and is based on Du Pont's PX/IRIS GL library and their PX/IRIS Inventor package (each of which is also available as a separate product). The software version of PX/IRIS GL works with Sun graphics boards, although a high-performance hardware version which takes advantage of Du Pont Pixel accelerator cards for Suns is also available.

The final section of the meeting was a question and answer session with the presenters on the rostrum and Ken Brodlie deftly maintaining some semblance of order before everyone moved off to enjoy the refreshments (kindly provided by SGI Madrid) and to play with their IRIS Explorer worry beads.

Thanks to all who presented at the meeting, and to those who came along to hear about what was happening with IRIS Explorer. If you have any queries about the meeting, or IRIS Explorer in general, please contact one of the IRIS Explorer Centers (Europe or USA).

Jeremy Walton, NAG