\centerline{\bf Landmarks in Electronic Publishing}
\medskip
\noindent
This two-day conference, held at the University of Durham 13--14 April, was a new venture for the BCS Electronic Publishing Specialist Group. In the five years since the Group was formed, all meetings have been in London and have lasted one day or less, with the exception of EP86 in Nottingham, which was not strictly a Group conference. This conference, `Landmarks in Electronic Publishing', was not only a longer meeting but a residential one, with more time for people to talk amongst themselves. In this sense it was extremely valuable, quite apart from the formal lectures.

The first speaker was Glenn Reid of Adobe Systems, giving a `Perspective on the \PS{} Language'. He began with a brief history of Adobe and a description of the page-description languages out of which \PS{} grew. He then summarized the principal features of \PS{}: it is an interpreted language, purely in seven-bit ASCII (and therefore humanly readable), with a low execution overhead. It is stack-based, all data types are represented by single objects, programs can be arbitrarily complex, and there is no buffering. Primitives are redefinable and there is late binding of names, making for a very flexible language. Glenn Reid went on to expand on the typographical features of \PS{}: outline fonts with rotation, sizing (although no hints on hints!), kerning, ligatures etc; and on the graphics features, now increasingly familiar to all in the field. Finally he talked about current developments: music, Kanji and vertical setting, and Display \PS{}. In summary, a useful perspective on \PS{} as it is today, but no real indication that Adobe are likely to `go public' and make even a full specification of \PS{} generally available.
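By way of illustration (my own sketch, not an example from the talk), the fragment below shows the flavour of this stack-based, postfix style: operands are pushed onto the stack before the operator that consumes them, and a name such as {\tt inch}, which is not a built-in operator, is simply defined by the programmer.
\begintt
% A user-defined `primitive': one inch is 72 points
/inch { 72 mul } def
/Times-Roman findfont 12 scalefont setfont
1 inch 10 inch moveto    % operands first, operator last
(Hello, Durham!) show
showpage
\endtt
Because names are bound late, even a standard operator such as {\tt show} can be redefined in just the same way, which is much of what makes the language so flexible.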
Glenn Reid was followed by William Roberts of Queen Mary College, London, on `\PS{} is not for Publishers'. My initial reaction was that perhaps this should have been `printers' rather than `publishers', but the main drift of William Roberts's talk was that there is little point in submitting books, journal papers etc to publishers in \PS{} form, because publishers' editors cannot then edit the files. Only those authors who wish to impose their own style on publishers will dissent from this viewpoint; certainly publishers should be supplied with source files. William Roberts's second point was that, good as \PS{} is, its implementation by many programmers in standard packages leaves a lot to be desired, and writers of DTP packages should produce much better \PS{}.

A rather different view of the \PS{} culture was put by Richard Patterson of Hyphen Editorial Systems, who spoke on `Cloning \PS{}'. His first comment was `don't do it', but he went on to explain why, and to a much lesser extent how, Hyphen had done it. The Hyphen clone is aimed at the typesetting (high-resolution) market; Richard Patterson gave an interesting summary of the developments in computer graphics during the sixties and seventies, developments in which the printing industry took no interest whatsoever, but upon which John Warnock of Adobe built to create \PS{}. By chance, Linotype happened to fix on \PS{} for their Linotronic typesetters and found themselves leading the market. Suddenly the printing industry, having had its head in the sand for several years, was all for \PS{}. However, a faster interpreter was really needed, particularly for newspapers, and Hyphen's clone therefore filled a market niche. Adobe `hints' are really only important for low-resolution devices, so, as Hyphen were aiming at high-resolution devices, it was not necessary to take account of them. Finally the speaker made a plea for a standard. He accepted that Adobe had a firm hold on the market, but felt that they should therefore take responsibility for \PS{} as a language, much in the way that AT\&T had done for UNIX. The Adobe books give a specification (although Adobe may change this at any time), but not a method of implementation.

Alan Francis of Page Description compared \PS{} and the Computer Graphics Metafile (CGM). CGM has been developed as an interchange format over several years and it was interesting to see a detailed comparison with \PS{}. It was obvious that both have a place: although in many ways \PS{} is more powerful, CGM is much more concise and therefore better for transfer between systems and for storage, while, because of its more flexible typography, \PS{} is the answer for controlling output devices.

The final speaker on the first day was James Gosling of Sun Microsystems, on the NeWS window system. This is not intended to be a \PS{} clone, but it carries out all the functions of one, so that \PS{} files can be viewed on the screen. It was also developed with networking in mind, so that relatively small \PS{} files can be transferred rather than large graphics files (Pixels are bad for you!). Unlike \PS{}, however, or rather as an extension of \PS{}, NeWS is object oriented, and screen attributes such as buttons, menus etc are defined as a hierarchy. James Gosling went on to discuss the merits or otherwise of true \wysiwyg\ and the problems of font hints at screen resolutions (80\,dpi). He explained how Sun approached the problem, although he emphasized that this did not necessarily mean that Adobe approached it in the same way. His explanation provoked some lively comment from the audience.

At the end of the talks there was a panel session, where members of the audience were able not only to ask specific questions but also to build on what had been said through the day. At last someone managed to hint at what `hints' are! The \PS{} fonts are encoded as outline information: this enables them to be reasonably concise, besides making it child's play to rotate, shear, or otherwise distort the coordinate system in which they are based. But, while this is tolerable on high-resolution devices (say over 700--1000\,dpi), on low-resolution devices, like laser printers, it leads to characters of rather poorer quality. Basically, typefaces just don't work like that, and even the odd pixel here or there can lead to the character on the page diverging from its ideal. That's where hints come in (they used to be called `Adobe font magic', but `hints' sounds more professional). They try to make allowances for the lower resolution. They also make allowances for the fact that correct scaling of a typeface is distinctly non-linear: a 10\,pt Times New Roman (as designed by Morison) is not merely magnified to obtain a 20\,pt version; all sorts of subtle changes (should) come into play. The hints try to effect these changes.
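To see what the hints are up against (again my own illustration, not one offered at the panel), the fragment below sets the same text at 10\,pt and at 20\,pt. The 20\,pt setting is produced by {\tt scalefont} as a purely linear magnification of the same outlines; any allowance for the device resolution, or for the non-linear changes a type designer would make between sizes, has to come from the hints built into the font, not from the program.
\begintt
/Times-Roman findfont 10 scalefont setfont
72 720 moveto (A specimen line) show
/Times-Roman findfont 20 scalefont setfont
72 690 moveto (A specimen line) show   % twice the size, same outlines
showpage
\endtt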
The only complaint one could have about the discussion period was that it was too short; just as it got interesting, food and drink intervened.

The second day began with the election of the Group's Chairman and committee. David Harris was elected to take David Brailsford's place, and three new members were elected to serve on the committee: Paul Bacsich of the OU, Maria Tuck of the Independent, and Cathy Booth of Exeter University.

As will by now be clear, the emphasis on the first day was on \PS{}, while that on the second day was more general, although it covered the implementation of integrated systems.

The first speaker was William Newman of Xerox EuroPARC. His topic was `Document Representations: Designing for Designers' and was really a summary of work carried out at PARC (Palo Alto Research Center) during the 1970s. It was fascinating for two reasons: firstly, the emphasis on design criteria, which is so often overlooked; secondly, and in many ways sadly, it is astonishing how far Xerox had reached over ten years ago and how little of that achievement ever reached the market place, at least directly. Perhaps the most telling point in William Newman's talk was when he said that he really did not see why it was necessary to wait for a page to come out of a laser printer; he didn't have to in 1974 --- so much for progress! Nonetheless, it is almost certain that had the PARC work not taken place, many of today's products would not exist. A final look into the future covered areas such as multimedia documents and the use of audio and video as part of information systems, areas where no-one has a great deal of experience.

Paul Bacsich described the implementation and integration of the EP system at the Open University. And integration is the important part of the implementation. The solution has been to use Ethernet to link Suns, VAXes and Apple Mac clusters (themselves communicating via AppleTalk and connected via a Kinetics box). Software integration has been at least partially achieved by the use of Microsoft Word and its Rich Text Format. Problems still exist, which is not surprising in view of the aim to incorporate into the EP integration the setting of maths, chemistry and music, as well as non-Latin texts, and to try to have these handled in as consistent a way as possible. It was an intriguing view of the problems involved in such an operation and it will be interesting to see how the situation develops. (Editorial note: the September meeting will be at the Open University, so you can go and see for yourself.)

The establishment of `The Independent' has been interesting, not only from a sociological point of view, but also in the technical context of Eddie Shah's ventures. Chris Hugh-Jones gave a very clear explanation of how the paper functions compared with a more conventional operation, together with a description of how the paper came into being. An Atex system is used, just as by a number of other papers, but everything is keyed only once. Journalists, sub-editors, editors etc all have access to the files, and in this way the paper can be designed on the screen, with no delay in output once a page has been finalized. Composed pages are then faxed to different parts of the country for printing and distribution. Chris Hugh-Jones also talked about the Magazine, produced on Saturdays. The text is handled using QuarkXPress (which produces one or two typographical problems, for example in hyphenation) and then sent to the Scitex colour graphics workstations for merging with the graphics, which have been scanned in. Finally, Chris Hugh-Jones discussed the future.
The Independent may well be ahead of the rest of the national press in its production techniques, but the suppliers to the newspaper industry are still some way behind those to the graphic arts industry.

Cathy Booth of the University of Exeter talked about teaching electronic document preparation as part of a modular degree course. One of the problems is that students start with widely varying levels of knowledge and competence in computing. However, the use of two types of program gives the students an insight into what is better for different types of publication: she chose the markup system \PCTeX, and the contrasting direct-manipulation system, PageMaker. The use of videos and projects helped considerably. Nonetheless, the biggest problem was the lack of resources, at least in relative terms, as this kind of course ideally requires as much hands-on experience as possible. The biggest triumph was the quality of some of the students' projects.

The final speaker was Ted Johnson, who is an Aldus Fellow, which means that he has a free role to investigate directions in which Aldus is likely to move. He reviewed Aldus's place in the market and their future options. He described four areas (or directions) into which Aldus could move: technical publishing, word processing, interactive drawing, and creative flexibility, although these are to a certain extent mutually exclusive. Possible solutions already rejected are (a) to abandon the market and (b) to provide everything for everyone. This leaves splitting product lines as a short-term option, e.g.~the development of FreeHand as a separate product from PageMaker. A more long-term development is object-oriented implementation, so that object classes can be tested independently of the specific application, with the aim of producing zero-defect software. Another benefit would be cross-product object sharing, so that once defects are fixed for one product they are fixed for all. A final implication would be that object subclasses could be created for different market segments, with customization by software vendors, VARs, systems integrators, corporate information-services departments, or even the sophisticated user. Essentially, then, the user would either buy a single customized product or a library from which he could put together his own product.

To sum up the conference is difficult. The range of people there was broad, from academia, through the commercial publishing world, to the corporate sector. The speakers, taken together, provided an interesting overview of how EP has developed over the last few years and how it is likely to develop. There were no announcements of great breakthroughs, but that was hardly to be expected. Perhaps the most important aspect of the meeting was the time spent informally (the social arrangements for the conference were excellent, for which the local organiser, Roger Gawley, must take the major responsibility) and the opportunity for discussion. As always, there were many familiar faces, but this time there were also a significant number of not-so-familiar faces, and the opportunity for discussions with different people, or even with the same people in a different place, meant that the general view was that this sort of meeting should happen again.
\smallskip
\rightline{\sl David W Penfold}