SIGGRAPH is the world's premier annual conference for computer graphics and interactive techniques, and this year it was held in San Diego at the end of July. It brings together researchers, practitioners, developers, industry big-wigs, artists, and anyone else involved in graphics and related fields such as gaming, film, and animation. SIGGRAPH is unusual in that it goes far beyond the rather staid nature of many academic conferences and enthusiastically embraces all aspects of the field. So, in addition to highly technical presentations of leading-edge research, we also get the guys from Pixar talking about how they achieved the amazing effects in the forthcoming Finding Nemo; game developers discussing the motivations behind their newest titles; an Electronic Theater night showing selected computer-animated short films; and numerous other diversions.

Arriving in San Diego, I quickly realised that the biggest problem for a SIGGRAPH virgin like myself is the sheer scale of the thing. Over 10,000 people usually turn up, and any conference centre wishing to host it has to be able to provide over a million square feet of exhibition space. A quick look at the programme shows that at any one time there are at least ten things going on, and for the conscientious attendee this leads to agonising choices and a timetabling problem of horrific proportions. Shall I go to a paper session on character animation? Or a course on real-time shading? Or will I go and visit the exhibition hall to see demos of nVidia’s newest graphics accelerators? Or join the Web Graphics series of discussions? Or just say “sod it” and go and lie on the beach?

The conference is divided into a number of different types of activities. The heart of SIGGRAPH is the paper sessions. For almost 30 years now, researchers have been presenting the newest computer graphics techniques in this forum. The selection process is extremely rigorous. This year there were 424 submissions and 81 were accepted. Successful researchers get to present their work in front of an audience of well over a thousand people. If you can imagine giving a talk in the Point Theatre then you are beginning to get the idea. Papers this year covered topics such as texture synthesis, animation of smoke and explosions, human animation, algorithms for GPUs, motion capture and shape from data. Carol O’Sullivan of Trinity College’s Image Synthesis Group presented a paper as part of a session on “Perception and Manipulation” which is, as far as I know, the first SIGGRAPH paper ever from Ireland.

In addition to the paper sessions, there is a series of half-day and full-day courses. These are given by experts in their fields and aim to bring people quickly up to speed on a given area of research. There were numerous courses to choose from this year; a highlight was the one on Dynamics by David Baraff and colleagues. Baraff practically invented the idea of driving 3D graphics with physics simulation. He now works for Pixar and so was able to illustrate his talks with examples of how these concepts were applied in films such as Monsters, Inc. and Finding Nemo. His colleague Andrew Witkin later presented a paper called “Untangling Cloth”, which dealt with solutions to cloth animation problems they ran into when working on the Boo character in Monsters, Inc.

When one’s brain reaches information overload it is easy to switch over to some of the other SIGGRAPH attractions. The Exhibition Hall is a showcase for the industry to parade their latest wares and everyone involved in graphics gets in on the act. So we had Discreet showing off 3DS Max 6, nVidia with their new range of graphics cards, and a bewildering array of hardware products from haptic input devices to VR equipment and 3D scanners. More stimulating fare was to be found in the Emerging Technologies room. Here participants were able to try out exotic new technologies and quiz their creators. Want to see a chroma-keying system that segments humans from video in real time using thermal information? Or a projector that uses a wall of fog as a screen? Or an interface that translates human body movements into 3D paintings? If you get bored with all this then there is always the Electronic Art Gallery, the Guerilla Studio, or the Computer Animation Festival.

Overall themes are difficult to identify in such a wide-ranging programme, but there’s no doubt that programmable graphics hardware continues to have a huge impact. The introduction of programmable graphics cards (built around processors commonly known as GPUs) has allowed developers to implement algorithms that run lightning-fast on the graphics processor, making complex shading, shadowing and texturing possible in real time. The latest generation of first-person shooter games has started to exploit this with glee, and at SIGGRAPH researchers demonstrated still more spectacular things possible with this technology, now and in the future. Peter-Pike Sloan of Microsoft Research presented pioneering work on carrying out lighting calculations on GPUs that takes advantage of pre-computed radiance and delivers stunning real-time rendering of glossy objects, with effects such as dynamic self-shadowing and inter-reflection. Other researchers talked about how to implement techniques such as collision detection and texture synthesis in this way, and there was also a heavy emphasis on the emerging languages for programming the graphics hardware, such as nVidia’s Cg language and the new OpenGL Shading Language, which is part of the OpenGL 2.0 standard.
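To give a flavour of what these shading languages express, here is the heart of a per-pixel diffuse (Lambertian) lighting calculation, the kind of computation a Cg or GLSL fragment shader runs for every pixel, every frame. This is a plain-Python sketch for illustration only; the function and variable names are our own invention, not part of any shading language's API.

```python
# Per-pixel Lambertian (diffuse) lighting: the intensity of light
# reflected from a surface is proportional to the cosine of the angle
# between the surface normal and the direction to the light source.
# Illustrative Python sketch of what a fragment shader computes.

def normalise(v):
    """Scale a 3D vector to unit length."""
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def dot(a, b):
    """Dot product of two 3D vectors."""
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def diffuse_shade(normal, light_dir, surface_colour, light_colour):
    """Lambert's law, with the cosine clamped at zero so surfaces
    facing away from the light receive no illumination."""
    n = normalise(normal)
    l = normalise(light_dir)
    intensity = max(0.0, dot(n, l))
    return tuple(s * c * intensity
                 for s, c in zip(surface_colour, light_colour))

# A red surface facing straight up, lit by white light from directly above:
print(diffuse_shade((0, 1, 0), (0, 1, 0), (1.0, 0.2, 0.2), (1.0, 1.0, 1.0)))
# full intensity: (1.0, 0.2, 0.2)
```

On the GPU this same arithmetic runs in parallel across millions of pixels, which is what makes real-time per-pixel lighting feasible.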

Almost everything at SIGGRAPH is of potential interest to game developers, and several special sessions specifically addressed this audience. In “Behind The Game: Deconstructing the Successes of 2002”, the developers of Neverwinter Nights, Sly Cooper and the Thievius Raccoonus, and Splinter Cell talked about the approaches, motivations and methodologies behind the production of these hits. Early in the week a fascinating course on Game Art was delivered. This dealt with the increasing use of game mods, such as those supplied with Half-Life and Unreal, as a means of building interactive 3D worlds intended as art pieces rather than as games. The most controversial of these is “911 Survivor”, which aims to simulate the experience of victims trapped in the Twin Towers. On a related theme, Velvet Strike allows the user to download anti-war graffiti tags which can then be sprayed onto the walls of online Counter-Strike worlds.

The item with the biggest “Wow!” factor at SIGGRAPH, for this attendee at least, was the High Dynamic Range Display, exhibited in Emerging Technologies. Modern graphics hardware and software have a very limited selection of colours and, more importantly, brightness values at their disposal. The range of light intensities in the real world is vastly wider, and our visual system has elaborate means of coping with it (e.g. squinting when we leave a dark cinema on a bright day). High dynamic range graphics have been knocking around in the research world for a few years now; the idea is that we capture real-world lighting, in its full range, and use this to render objects on the screen. Up until now, though, the results have still had to be squashed into the RGB range that the monitor is capable of displaying. However, the High Dynamic Range Display System developed by Sunnybrook Technologies can cope with a massive dynamic range comparable to that found in the real world. Or to put it simply, the bright bits are way brighter and the dark bits blacker than black. Images rendered with captured high-dynamic-range lighting look truly amazing on this display and come impressively close to the manufacturer’s vision of creating digital display systems that can “present images that are visually indistinguishable to the real setting they portray”. I want one, now.
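The “squashing” step is known as tone mapping. One of the simplest global operators, the Reinhard curve L/(1+L), shows the idea: luminance values of any magnitude are compressed into the [0, 1) range a conventional monitor can display, with dim values surviving almost unchanged and huge ones landing just below 1. This is a minimal illustrative sketch, not any particular product's actual pipeline:

```python
# Global Reinhard tone mapping: compress high-dynamic-range luminance
# (0 to infinity) into the [0, 1) range of a conventional display.

def tone_map(luminance):
    """Map an HDR luminance value into [0, 1) via L / (1 + L)."""
    return luminance / (1.0 + luminance)

# Hypothetical scene luminances spanning many orders of magnitude:
hdr_pixels = [0.01, 0.5, 1.0, 100.0, 10000.0]
ldr_pixels = [tone_map(lum) for lum in hdr_pixels]
# Dim values are barely touched; very bright ones are squashed
# towards (but never reach) 1.0.
```

A true HDR display such as Sunnybrook's largely removes the need for this compression, since it can reproduce much of the captured range directly.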

SIGGRAPH is a fantastic conference that more than lives up to its billing as the greatest graphics show on earth. I recommend anyone with an interest in graphics, gaming or related technologies to try and make the trip. And in a message to my employers – I didn’t really bunk off and go to the beach. Honestly.


Author Bio: Hugh McCabe is a lecturer at the Institute of Technology Blanchardstown and a founding member of the Graphics and Gaming Research Group. See the resources section of this website for more information on the group's projects in 2D, AI and audio:


Graphics and Gaming Group at ITB: learningandinnovation/ggg/
ISG, Trinity College: http://isg.cs.tcd.ie
Peter-Pike Sloan:
Cg language: object/cg.html
OpenGL shading language:
911 Survivor:
Velvet Strike: velvet-strike/
High Dynamic Range Displays: