STARLINK

Mike Lawden

June 1996

Starlink General Paper 31.11

1. STARLINK

Support for astronomical computing in the UK

Starlink was set up in 1980 to help astronomers use computers to analyse their observations. It is:

Starlink's main objectives are to:

Work supported

The types of astronomy supported by Starlink are:

Work on Starlink computers has the following priorities:

  1. Astronomical data reduction that can only be done interactively.
  2. Other astronomical data reduction.
  3. Other astronomical research work that can only be done interactively.
  4. Other astronomical research work.
  5. Other PPARC astronomy work.

(In this list astronomical data includes data obtained from telescopes and data obtained from theoretical calculations made on other computers.)

The following types of astronomy are not supported by Starlink:

These types of work are normally supported by other sources of funding, such as PPARC grants.

Sites

Starlink facilities are concentrated at a number of sites spread around the country. Details of individual sites are given in Appendix B.

Day-to-day management of a Starlink site is the responsibility of its Site Manager, who may be helped at large sites by an Assistant. The overall management of a site is monitored by a Site Chairman, accredited by the Project.

How many sites are there? That depends on your viewpoint:

Geographical
The obvious one. We call them Sites. There are 26 of them. The Cambridge site is split between IoA/RGO and MRAO, which are a 10-minute walk apart but regarded as a single Site.
Administrative
The administrator's view. Its primary criterion is the allocation of long-term on-site management. We call them Nodes. There are 26 of them. But these are not quite the same as the 26 sites. The sites at Armagh and Belfast are regarded as the Northern Ireland node, while the RAL site contains two nodes: the Project node and the RAL - Astrophysics node.
Operational
This is the way things look when you want to send an e-mail message or count user numbers. We call them User-Centres. There are 28 of them. The Northern Ireland node has two user-centres (Armagh and Belfast), and the Cambridge node also has two user-centres (IoA/RGO and MRAO). (In practice, the word site is often used to refer to a user-centre.)

The thing to remember is that Northern Ireland has one node and two sites, RAL has one site and two nodes, and Cambridge has two user-centres. All the other sites have one of everything.

Remote User Groups

In addition to Starlink Nodes, there are Remote User Groups (RUGs). A RUG does not have funding for long-term on-site management but receives, or expects to receive, hardware and software support from Starlink. It may also receive remote site management effort from other sites. There are 5 RUGs at present; they are not counted in the numbers given above.

Central facilities

A further Starlink facility is a central computer system at RAL which stores astronomical catalogues and proprietary software which is licensed only for this one machine. Any Starlink user may use it from any site, via a network connection.

Users

In May 1996 there were 1948 Starlink Users. Details may be found in Appendix B, which also includes a chart showing the history of user numbers since the start of the Project.

Users are classified in several ways. The most commonly used are:

Uniqueness

This is used to avoid double counting, because some users are registered at more than one site. If you simply add up the number of users registered at each site, the grand total will be higher than the actual number of users. To solve this problem, each site classifies each of its registered users in one of two ways (a small counting sketch follows the list below):

  • Primary - the user's registration at their main site; each user is Primary at only one site.
  • Secondary - the user's registration at any additional site.
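
To illustrate the rule, here is a minimal Python sketch (the user names, sites, and record layout are invented for illustration, not Starlink's actual registration data): summing the registrations held at every site double-counts a multiply-registered user, while counting Primary registrations alone gives the true total.

    # A minimal sketch of the Primary/Secondary counting rule.
    # Each registration pairs a user with a site and a kind flag.
    registrations = [
        ("a.smith", "IoA/RGO", "Primary"),
        ("a.smith", "MRAO", "Secondary"),   # same user registered at a second site
        ("b.jones", "ROE", "Primary"),
    ]

    # Adding up the registrations held at every site double-counts a.smith ...
    total_registrations = len(registrations)                                    # 3

    # ... but counting only Primary registrations gives the true number of users.
    actual_users = sum(1 for _, _, kind in registrations if kind == "Primary")  # 2

    print(total_registrations, actual_users)
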
Type

This is used to help us estimate the number of users who do different types of work. It helps us allocate resources rationally. The Types are:

  • r - Research astronomer actively processing data.
  • t - Scientific and technical support staff.
  • o - Others, like secretaries.
  • u - Undergraduate engaged in an astronomical research programme.
  • a - Associate user (does not have an official position in a university or similar institution).
  • f - People who are usually not resident in the UK, such as visitors.
Community

Users are allocated a:

  • Location code - Not all users work at the same place or in the same department as the one in which a Starlink site is located. This code helps us get a better idea of the size and location of the different communities of workers we are supporting.

Application forms for prospective Starlink users are available from the Site Manager of the site they wish to use.

Postgraduate students will need their application form countersigned by their supervisor. Undergraduates, unfortunately, cannot be accepted, except for collaboration in research aimed at publication, as Starlink is not funded for educational work. Applications from people not associated with an HEI, for example a serious amateur astronomer doing research, are considered on a case-by-case basis. Such a person should normally be working in collaboration with astronomers at an HEI.

All applications are formally authorised by the Starlink Project Scientist.

User support

Users can get help from many places:

The primary focus of user support is the Site Manager. Informal contacts with colleagues can also resolve many problems.

Overall policy on how a site is run is overseen by the Site Chairman, in consultation with its users, with the Site Manager, and with Starlink management. The aim is to operate each site in accordance with the needs and wishes of its users. The exact nature of local support organisations can vary, but they usually involve a Starlink Local User Group (SLUG) which all users may attend and participate in. The Site Chairman has a direct input to Starlink to report difficulties and to recommend policy changes.

Starlink welcomes input from users. They can contact the sources of support mentioned above. Other communication routes are Software Strategy Groups, Software Questionnaires, and the Starlink meeting at the annual National Astronomy Meetings. There is also an established complaints procedure.

Information sources

Extensive documentation is available on the Project itself, and on individual software items. Many of the larger packages have on-line help systems. The information sources are described in detail in the Starlink User's Guide (SUG), but the main source is the set of Starlink User Notes (SUN), Starlink Guides (SG), and Starlink Cookbooks (SC). There are over 180 of these Notes and Guides (about 6000 pages), and they are extensively indexed. Also, a newsletter called the Starlink Bulletin is published twice a year and circulated to all users to keep them in touch with what is happening in the Project.

A large amount of information about Starlink is also provided on the World Wide Web, including most of the Notes and Guides. Most sites have their own home pages, but the central one is maintained by the Project node and its URL is:

http://star-www.rl.ac.uk/

From here you can branch to all the other sites which provide web pages.

Software

The major software product provided by Starlink for its users is the Starlink Software Collection (SSC). It is managed and distributed by the Starlink Software Librarian at RAL. It is installed at every Starlink site, and has also been distributed to many other sites all over the world.

The SSC comprises about 120 items (40 packages, 40 utilities, 40 subroutine libraries and infrastructure). It is being actively developed and there are about 40 software releases per year. This means that published information like this gets out-of-date rapidly, so it is worth checking with the Project on the latest information - the World Wide Web and the Starlink Bulletin are good sources of current information. A summary of the current software available is given in Appendix B. The best survey of the SSC as a whole is SUN/1.

Starlink software originates from many different sources and has different levels of support. Some is a legacy from Starlink's past. Some is being actively developed and supported by Starlink's own programmers. Some (like AIPS, IDL, and IRAF) has been obtained from outside the Project. Starlink software development is controlled by the Starlink Panel, Software Strategy Groups, and the Project Group at RAL.

Software Environments

An important software issue is the software environment. This term covers: programming languages; job control; command languages; data systems; graphics; documentation aids; utilities; error handling. It is important both for reasons of software development efficiency, and because of its central role in the organisation of Starlink software.

Environment designs lie on a continuum between two extreme positions.

At one extreme, the facilities provided by the computer manufacturer are used, together with a collection of ad hoc routines. The dangers of this are: huge monolithic programs offering facilities of limited flexibility; a multiplicity of systems which are idiosyncratic to user and programmer; fixed, inflexible data formats; incompatibility between different systems; duplication.

At the other extreme, an ideal system is created which is machine, operating system, and programming language independent. The dangers of this are: it might not be what the users want or need; it takes a long time to develop; it absorbs application programming effort; and it can be inefficient and slow.

The first approach tends to be favoured by users innocent of the real cost of software, who just want to get on with the job of analysing data as quickly as possible. The second approach tends to be favoured by computer scientists, who are more interested in a clever system than one that is useful for astronomers.

Starlink, with the approval of the relevant UK astronomical committees, adopted a middle course based on the ADAM environment. This provides application programmers with a uniform and powerful system of data storage, user interface, programming tools, job control, command language, and graphics. It is described in SG/4.

Hardware

Each Starlink site has a computer system based on Unix workstations and servers. At present these are based on DEC Alpha/Digital Unix and Sun SPARC/Solaris architectures, although other systems may be adopted in the future (for example, Linux on PCs). A residual VAX/VMS service is maintained at the Project Node to run legacy software.

In addition to Unix CPUs, the Project also supplies X-terminals, SCSI disks, Exabyte and DAT tape drives (for data exchange, backups, etc.), printers (mostly PostScript, and including some colour devices), networking equipment and other more specialized pieces of hardware.

A central computing facility is also provided at the Project Node to give all Starlink users access to some expensive commercial software and large databases.

Hardware enhancements are decided after annual bids to the Starlink Panel. Hardware funded from other sources, e.g. grants, may be brought under the Starlink umbrella. If this is agreed, Starlink takes over the running costs and management.

Organisation

Starlink is run on behalf of the Particle Physics and Astronomy Research Council (PPARC) by the Starlink Project Group within the Space Science Department (SSD) of Rutherford Appleton Laboratory (RAL). It has an annual budget of about 2 million pounds.

RAL is part of the Council for the Central Laboratory of the Research Councils (CCLRC).

PPARC is one of the Research Councils funded by the Office of Science and Technology (OST) within the Department of Trade and Industry (DTI).

Starlink Panel

This is a PPARC committee which:

It consists of a Chairman and seven others. They are university staff with interests covering a wide range of computing and astronomy. It normally meets three times a year, in March, July, and November.

Starlink Project Group

This is located at RAL and is the focus of day-to-day Project management. It:

Software Strategy Groups

These groups are one of the main sources of advice for Starlink on the development of its software. They cover the following areas:

History

In the 1970s it became clear that the data processing facilities then available to UK astronomers were inadequate to deal with the anticipated flood of digital data which would be generated by new data acquisition techniques.

In April 1978, the Science Research Council (which became the SERC, and later PPARC) set up a Panel on Astronomical Image and Data Processing under the chairmanship of Professor M J Disney to ascertain the computing needs of UK astronomers for the next 5 to 7 years. This Panel reported in April 1979 and recommended the installation of 6 super-minicomputers connected together in a star network by leased lines; hence the name Starlink.

The computer chosen was the DEC VAX 11/780. Between December 1979 and July 1980, these were installed at Cambridge, Manchester, RAL, RGO, ROE, and UCL. Astronomers at other sites used Starlink facilities via network links. The initial Project staff were appointed by mid 1980 with the administrative centre located at RAL. Starlink was inaugurated on 24th October 1980 by Mr D N MacFarlane, Parliamentary Under Secretary at the Department of Education and Science.

Starlink was the first astronomical data processing system to use networking extensively. The early links ran at 4800 baud and used DEC protocols exclusively.

A Scientific Advisory Group (SAG) and later a Starlink Users' Committee (SUC) advised RAL management on the running of the Project. There were also Area Management Committees (AMCs) for groups of sites geographically close to each other. Special Interest Groups (SIGs) considered software development in specific application areas. A Hardware Advisory Group (HAG) advised the Project on hardware purchase.

Sites and Users

Since Starlink began in 1980 the number of Starlink sites has grown from 6 to 28 and the number of users has grown from about 200 to about 2000. Details are given in Appendix B. This reflects an increase in the use of computers by astronomers, and a move by university groups to transfer from non-Starlink to Starlink systems. In 1990 the RGO node merged with the Cambridge node when RGO moved from Sussex to Cambridge, and the RAL node split into the Project node and the Astrophysics node.

Hardware

Right from the outset, Starlink has used equipment from a variety of suppliers to maximize cost-effectiveness. For example, the original six VAX 780s were equipped with disks from System Industries and printers from Printronix and Versatec.

As new Starlink sites were established, they were set up with VAX 750s and, subsequently, MicroVAXes. From 1988 onwards, MicroVAXes were first added to VAX 780 and 750 sites to create VAX clusters, and then the 780s and 750s themselves were replaced by MicroVAXes. The last Starlink 780 was removed from Herstmonceux in early 1990, when RGO moved to Cambridge.

By 1990, Starlink's hardware comprised clusters of MicroVAXes and VAXstations, and workstation screens had replaced the earlier specialized image display devices, such as the Sigma ARGS. By basing our software solely on VMS since Starlink was established, we had economised on software costs, but Unix offered two major benefits:

It was decided, therefore, to move from VMS to Unix, and in 1992 the Willmore Review of Starlink recommended that the move should be completed by April 1995. Starlink's first Unix hardware was a DECstation 2100 running Ultrix, purchased by the Project in late 1989 to investigate software porting questions.

The move to Unix took several years, but was completed by the April 1995 target. During the move, Starlink supported up to five platforms in parallel: VAX/VMS, SPARC/SunOS, SPARC/Solaris, DECstation/Ultrix, and Alpha/OSF (Digital Unix). Since completing the move we have rationalized support, concentrating on our two main platforms: SPARC/Solaris and Alpha/Digital Unix. Nothing stands still, however, and Starlink has recently started to offer support for a third platform: PCs running Linux.

Software

Software development plans were first discussed at a workshop held at Appleton Laboratory in November 1979 and the following recommendations were made:

The first software release occurred in March 1980.

The first Starlink software environment (INTERIM), first released in September 1980, was a data and parameter system towards the pragmatic end of the spectrum. It also had a fairly primitive command language (DSCL). It was tolerably efficient, very easy to use, and was available within 9 months of the start of the Project. It served as the basis for a lot of application software which was widely used.

The second Starlink software environment (SSE) was an ambitious concept residing near the utopian end of the spectrum, and very similar to the successful IRAF system which appeared some years later. It was developed for 5 years but was never considered satisfactory and was inadequately documented. It was 44 times bigger than INTERIM (even though incomplete) and command processing was unacceptably slow on the equipment of the time. It contained some powerful and promising packages, in particular a good graphics system based on GKS and a Hierarchical Data System (HDS) of considerable elegance, power, and efficiency. Its failure was due to its development being badly affected by catastrophic losses of key staff and, more generally, by lack of central programming effort in a Project whose customer base was expanding rapidly. This serious situation was considered at two meetings at RAL in November 1985 and February 1986.

The result was that a new software environment (ADAM) was adopted for development. This did not represent a fresh start in the way that SSE was a radically different design from INTERIM. In fact it derived from many sources taken from many different places. It owed a lot to SSE, and in some respects was a re-implementation of it with a greater emphasis on efficiency. The initial focus for this work was the production of a real-time telescope and instrument control system at ROE and RGO, but this was now broadened to encompass Starlink's data-analysis requirements. The initial release was in September 1986, and it has since been developed extensively.

All Starlink software development in the eighties was based on VMS. A major challenge and opportunity in the nineties was presented by the arrival of powerful Unix-based workstations. It was necessary to port selected Starlink software to these machines; in particular, a big effort was needed to port the ADAM environment. In the event, this was achieved comfortably before the April 1995 deadline for the move to Unix. Software development is now concentrated on Unix systems.

Questionnaires

In the spring of 1986, a major exercise was carried out to determine what the users thought should be the future direction of the Starlink Project. A 14-page questionnaire was distributed to 669 users and potential users and 394 were returned, a very high proportion. The returns were analysed and the results influenced the direction of the Project.

Another extensive questionnaire, concerned specifically with Starlink software, was distributed to over 1600 users in January 1994. Over 500 were returned and the results heavily influenced Starlink's software development plans.

A further software questionnaire was distributed to nearly 2000 users in March 1996. Once again, over 500 were returned and the results of their analysis will have a similar influence on the Project.

What have we learnt?

The Starlink Project has been going for sixteen years. What have we learnt during this time?

The VAX computer was an excellent initial choice. It provided a powerful, well documented, user-friendly operating system. For some time it was the standard hardware for mainstream, worldwide astronomical computing. Developments, such as MicroVAXes and VAX clusters, provided a natural growth path for the Project. The choice of VAX/VMS and UK-wide networking made Starlink a success, in spite of many problems. The arrival of powerful Unix-based scientific workstations changed the situation, and we completed a move from VMS to Unix by early 1995.

Compatible hardware at each site allows software to be distributed easily and avoids wasteful development of different versions for different hardware. Central purchase and maintenance of this hardware has allowed Starlink to negotiate extra discounts from suppliers and so reduce costs.

The network has been vital. It enables a basic set of software to be controlled centrally and distributed rapidly, so that a common user environment exists at every site. The electronic mail facilities are heavily used and extremely valuable. They bind the astronomical community together.

Our first software environment (INTERIM) was modest but successful. The second (SSE) proved to be over-ambitious in relation to our limited resources, and on the computers of the time was just too slow for users to tolerate. The third (ADAM) seems to have been a good move forward. Perhaps the biggest lesson is the importance of having a strong, central programming team under firm control when embarking on this kind of project. After prolonged efforts by Starlink management, this situation was achieved by the creation at RAL of an Infrastructure support group.

Once a large collection of software has been distributed, the maintenance and support requirements remain considerable. This is not understood by some astronomers, and is one of Starlink's most intractable problems.

Central management of the preparation and distribution of software releases by a Software Librarian is vital to the cohesiveness of the Project. A properly integrated set of software, documentation and administrative information installed at multiple sites is difficult to achieve and has to be well supported. A related problem is the dispersion of different versions of specific software items within different packages. This leads to waste of storage space and difficulties with updates. The solution lies in central co-ordination of software development and the use of programming support environments.

Acknowledgements

I would like to thank the following people for their significant contributions to this document: Chris Clayton, Alan Penny, Andrea Roberts, John Sherman, Dave Terrett, Patrick Wallace.

Appendix A: Glossary

AMC Area Management Committee
CCLRC Council for the Central Laboratory of the Research Councils
CPU Central Processor Unit
DEC Digital Equipment Corporation
DfE Department for Education
DTI Department of Trade and Industry
HAG Hardware Advisory Group
HEI Higher Educational Institute
HST Hubble Space Telescope
IoA Institute of Astronomy, Cambridge
MRAO Mullard Radio Astronomy Observatory
OST Office of Science and Technology
PC Personal Computer (IBM compatible)
PPARC Particle Physics & Astronomy Research Council
RAL Rutherford Appleton Laboratory
RGO Royal Greenwich Observatory
ROE Royal Observatory Edinburgh
RUG Remote User Group
SAG Scientific Advisory Group
SC Starlink Cookbook
SCSI Small Computer System Interface
SERC Science and Engineering Research Council
SG Starlink Guide
SIG Special Interest Group
SLUG Starlink Local User Group
SSC Starlink Software Collection
SSD Space Science Department (RAL)
SSE Starlink Software Environment
SUC Starlink Users' Committee
SUG Starlink User's Guide
SUN Starlink User Note
UCL University College London
UK United Kingdom
URL Uniform Resource Locator
VAX Virtual Address Extension (DEC)
VMS Virtual Memory System (DEC)

Appendix B: Detailed information


The pages that follow contain charts and tables that give detailed information about Starlink. These are described below in order of appearance. The information was the latest available at the time of publication.

  1. Shows the number of users at each User-Centre. The Prim block shows the number of Primary users at a site. The Sec block shows the number of Secondary users at a site.
  2. Shows the proportions of different types of user (specified in the Users section above).
  3. Shows the growth in the total number of Starlink users since the Project began in 1980.
  4. Shows the growth in the number of User-Centres since the Project began. See the Sites section for a definition of User-Centre.
  5. Lists the names of the items in the Starlink Software Collection (SSC), classified into functional area.
  6. Lists the postal address of every Starlink User-Centre. The RAL/Astrophysics User-Centre and the Project User-Centre are both located at RAL and have the same postal address. Thus, only 27 addresses are given in the list, although there are 28 Starlink User-Centres.