Advances in real-time graphics research and the increasing power of mainstream GPUs have generated an explosion of innovative algorithms suitable for rendering complex virtual worlds at interactive rates. This course focuses on recent innovations in real-time rendering algorithms used in shipping commercial games and high-end graphics demos. Many of these techniques are derived from academic work presented at SIGGRAPH in past years, and we seek to give back to the SIGGRAPH community by sharing what we have learned while deploying advanced real-time rendering techniques in the mainstream marketplace.
Topics
Examples of practical real-time solutions to complex rendering problems:
● Increasing apparent detail in interactive environments
● Inverse displacement mapping on the GPU with parallax occlusion mapping
● Out-of-core rendering of large datasets
● Environmental effects such as volumetric clouds and rain
● Translucent biological materials
● Single-scattering illumination and approximations to global illumination
● High dynamic range rendering and post-processing effects in game engines
This course is intended for graphics researchers, game developers, and technical directors. Thorough knowledge of 3D image synthesis, computer graphics illumination models, the DirectX and OpenGL APIs, high-level shading languages, and C/C++ programming is assumed. The intended audience also includes technical practitioners and developers of graphics engines for visualization, games, or effects rendering who are interested in interactive rendering.
Welcome and Introduction
Natalya Tatarchuk (ATI Research)
Out-of-Core Rendering of Large Meshes with Progressive Buffers
Pedro V. Sander (ATI Research)
Animated Skybox Rendering and Lighting Techniques
Pedro V. Sander (ATI Research)
Artist-Directable Real-Time Rain Rendering in City Environments
Natalya Tatarchuk (ATI Research)
Rendering Gooey Materials with Multiple Layers
Chris Oat (ATI Research)
Parallax Occlusion Mapping for Detailed Surface Rendering
Natalya Tatarchuk (ATI Research)
Real-Time Atmospheric Effects in Games
Carsten Wenzel (Crytek GmbH)
Shading in Valve’s Source Engine
Jason L. Mitchell (Valve)
Ambient Aperture Lighting
Chris Oat (ATI Research)
Fast Approximations for Global Illumination on Dynamic Scenes
Alex Evans (Bluespoon)
Course Organizer
Natalya Tatarchuk is a staff research engineer in the demo group of ATI's 3D Application Research Group, where she likes to push GPU boundaries investigating innovative graphics techniques and creating striking interactive renderings. Her recent achievements include leading the creation of the state-of-the-art realistic rendering of city environments in the ATI demo “ToyShop”. In the past she was the lead of the tools group at ATI Research. She has published articles in technical book series such as ShaderX and Game Programming Gems, and has presented talks at SIGGRAPH and at Game Developers Conferences worldwide. Natalya holds Bachelor's degrees in Computer Science and Mathematics from Boston University and is currently pursuing a graduate degree in Computer Science with a concentration in graphics at Harvard University.
A note from the organizer
Welcome to the Advanced Real-Time Rendering in 3D Graphics and Games course at SIGGRAPH 2006. We’ve included both 3D Graphics and Games in our course title in order to emphasize the incredible relationship that is quickly growing between the graphics research and game development communities. Although in the past interactive rendering was synonymous with gross approximations and assumptions, often resulting in simplistic visuals, with the amazing evolution of the processing power of consumer-grade GPUs the gap between offline and real-time rendering is rapidly shrinking. The real-time domain is now at the forefront of state-of-the-art graphics research – and who would not want the pleasure of instant visual feedback?
As researchers, we focus on pushing the boundaries with innovative computer graphics theories and algorithms. As game developers, we bend the existing software APIs such as DirectX and OpenGL and the available hardware to perform our whims at highly interactive rates. And as graphics enthusiasts we all strive to produce stunning images which can change in the blink of an eye and let us interact with them. It is this synergy between researchers and game developers that is driving the frontiers of interactive rendering to create truly rich, immersive environments. There is no greater satisfaction for developers than to share the lessons learned and to see our technologies used in ways never imagined.
This is the first time this course is being presented at SIGGRAPH, and we hope that you enjoy this year’s material and come away with a new understanding of what is possible without sacrificing interactivity! We hope that we will inspire you to drive real-time rendering research and games!
Natalya Tatarchuk, ATI Research, Inc.
April 2006
Introduction and Welcome, N. Tatarchuk (Introduction Chapter PDF, Introduction Slides PDF)
Abstract: We introduce a view-dependent level-of-detail rendering system designed with modern GPU architectures in mind. Our approach keeps the data in static buffers and geomorphs between different LODs using per-vertex weights for seamless transitions. Our method is the first out-of-core system to support texture mapping, including a mechanism for texture LOD. This approach completely avoids LOD pops and boundary cracks while gracefully adapting to a specified frame rate or level of detail. Our method is suitable for all classes of GPUs that provide basic vertex shader programmability and is applicable to both out-of-core and instanced geometry. The contributions of our work include a preprocessing and rendering system for view-dependent LOD rendering by geomorphing static buffers using per-vertex weights, a vertex buffer tree to minimize the number of API draw calls when rendering coarse-level geometry, and automatic methods for efficient, transparent LOD control.
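To make the geomorphing concrete, the sketch below illustrates the kind of per-vertex blend the abstract describes, written as CPU-side C++ standing in for a vertex shader. The structure and names (GeomorphVertex, lodWeight, the lodNear/lodFar distance band) are illustrative assumptions, not taken from the paper.

```cpp
// Minimal sketch of per-vertex geomorphing between static LOD buffers.
#include <algorithm>

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + t * (b.x - a.x), a.y + t * (b.y - a.y), a.z + t * (b.z - a.z) };
}

// Each vertex carries its own position plus the position of the vertex it
// collapses to in the next-coarser LOD; both live in one static buffer.
struct GeomorphVertex {
    Vec3 finePos;    // position in this LOD
    Vec3 coarsePos;  // position of the parent vertex in the coarser LOD
};

// Per-vertex morph weight from viewer distance: 0 = fully fine, 1 = fully
// coarse. lodNear/lodFar bracket the band over which this LOD morphs out.
float lodWeight(float viewerDist, float lodNear, float lodFar) {
    return std::clamp((viewerDist - lodNear) / (lodFar - lodNear), 0.0f, 1.0f);
}

Vec3 morphedPosition(const GeomorphVertex& v, float viewerDist,
                     float lodNear, float lodFar) {
    // Smooth per-vertex transition: no LOD pops, and shared boundary
    // vertices morph identically on both sides, so no cracks.
    return lerp(v.finePos, v.coarsePos, lodWeight(viewerDist, lodNear, lodFar));
}
```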
Speaker Bio:
Pedro V. Sander is a member of the 3D Application Research Group of ATI Research. He received his Bachelor's degree from Stony Brook University, and his Master's and PhD in Computer Science from Harvard University. Dr. Sander has done research in geometric modeling, more specifically in efficient rendering techniques and mesh parameterization for high-quality texture mapping. At ATI, he is researching real-time rendering methods using current and next-generation graphics hardware.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: In this presentation we briefly describe techniques used to represent and render the high dynamic range (HDR) time-lapse sky imagery in the real-time Parthenon demo (Figure 1). These methods, along with several other rendering techniques, achieve real-time frame rates using the latest generation of graphics hardware.
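As a point of reference for readers new to HDR display, the following minimal sketch shows one generic exposure-based tone-mapping operator of the kind an HDR sky pipeline must apply before display; the exponential operator and the names here are common textbook choices, not necessarily what the Parthenon demo uses.

```cpp
// Minimal sketch of exposure-based tone mapping for HDR radiance values.
#include <cmath>

struct RGB { float r, g, b; };

// Map a high-dynamic-range radiance value into the displayable [0,1) range.
RGB toneMap(const RGB& hdr, float exposure) {
    auto expose = [exposure](float c) { return 1.0f - std::exp(-c * exposure); };
    return { expose(hdr.r), expose(hdr.g), expose(hdr.b) };
}
```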
Speaker Bio:
Pedro V. Sander is a member of the 3D Application Research Group of ATI Research. He received his Bachelor's degree from Stony Brook University, and his Master's and PhD in Computer Science from Harvard University. Dr. Sander has done research in geometric modeling, more specifically in efficient rendering techniques and mesh parameterization for high-quality texture mapping. At ATI, he is researching real-time rendering methods using current and next-generation graphics hardware.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: In this talk we will cover approaches for creating visually complex, rich interactive environments, using the development of the world of the ATI “ToyShop” demo as a case study. We will discuss the constraints on developing large immersive worlds in real time and go over the considerations for developing lighting environments for such scene rendering. Rain-specific effects in city environments will be presented. We will give an overview of the lightning system used to create illumination from lightning flashes, the high dynamic range rendering techniques used, various approaches for rendering rain effects, and dynamic water simulation on the GPU. Methods for rendering reflections in real time will be illustrated. Additionally, a number of specific material shaders for enhancing the feel of the rainy urban environment will be examined.
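By way of illustration only (this is not the ToyShop implementation), the snippet below sketches one common rain trick in the family the talk covers: scrolling several rain-streak texture layers at different scales and speeds so that the layers read as parallax depth. All names and parameters are invented for the example.

```cpp
// Per-layer texture coordinates for compositing scrolling rain-streak layers.
#include <cmath>

struct UV { float u, v; };

// Coordinates for one streak layer at frame time t; nearer layers use a
// larger 'scale' and a faster 'fallSpeed', which reads as depth.
UV rainLayerUV(UV screenUV, float t, float scale, float fallSpeed) {
    // Wrap vertically so the streak texture tiles as it falls.
    float v = screenUV.v * scale + t * fallSpeed;
    return { screenUV.u * scale, v - std::floor(v) };
}
```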
Speaker Bio:
Natalya Tatarchuk is a staff research engineer in the demo group of ATI's 3D Application Research Group, where she likes to push GPU boundaries investigating innovative graphics techniques and creating striking interactive renderings. Her recent achievements include leading the creation of the state-of-the-art realistic rendering of city environments in the ATI demo “ToyShop”. In the past she was the lead of the tools group at ATI Research. She has published articles in technical book series such as ShaderX and Game Programming Gems, and has presented talks at SIGGRAPH and at Game Developers Conferences worldwide. Natalya holds BA degrees in Computer Science and Mathematics from Boston University and is currently pursuing a graduate degree in Computer Science with a concentration in graphics at Harvard University.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: An efficient method for rendering semi-transparent, multi-layered materials is presented. This method achieves the look of a volumetric material by exploiting several perceptual cues, based on depth and illumination, while combining multiple material layers on an otherwise non-volumetric, multi-textured surface such as the human heart shown in Figure 1. Multiple implementation strategies are suggested that allow for different trade-offs to be made between visual quality and runtime performance.
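To give a flavor of the two perceptual cues the abstract mentions, the sketch below shows a view-dependent parallax offset that makes an inner texture layer appear to sit beneath the outer surface, and a Beer-Lambert-style attenuation of light crossing the outer layer. The names and formulation are illustrative assumptions, not the chapter's actual code.

```cpp
// Two cues for faking a volumetric, layered material on a single surface.
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };  // z = component along the surface normal

// Shift the inner layer's texture coordinates along the tangent-space view
// direction; a larger layerDepth makes the layer read as deeper. Assumes
// viewDir points from the surface toward the eye with viewDir.z > 0.
Vec2 innerLayerUV(Vec2 uv, Vec3 viewDir, float layerDepth) {
    float scale = layerDepth / viewDir.z;
    return { uv.x + viewDir.x * scale, uv.y + viewDir.y * scale };
}

// Beer-Lambert-style absorption: light traversing the outer layer is dimmed
// exponentially with the thickness it travels through.
float layerAttenuation(float thickness, float absorption) {
    return std::exp(-thickness * absorption);
}
```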
Speaker Bio:
Chris Oat is a senior software engineer in the 3D Application Research Group at ATI where he explores novel rendering techniques for real-time 3D graphics applications. As a member of ATI's demo team, Chris focuses on shader development for current and future graphics platforms. He has published several articles in the ShaderX and Game Programming Gems series and has presented at game developer conferences around the world.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: This talk presents a per-pixel ray-tracing algorithm with dynamic lighting of surfaces in real time on the GPU. First, we will describe a method for increasing the precision of the critical ray–height-field intersection, along with adaptive height-field sampling. We achieve higher-quality results than existing inverse displacement mapping algorithms. Second, soft shadows are computed by estimating light visibility for the displaced surfaces. Third, we describe an adaptive level-of-detail system which uses the information supplied by the graphics hardware during rendering to automatically manage shader complexity. This LOD scheme maintains smooth transitions between the full displacement computation and a simplified representation at a lower level of detail without visual artifacts. Finally, algorithm limitations will be discussed along with practical considerations for integration into game pipelines. Specific attention will be given to art asset authoring, providing guidelines, tips, and concerns. The algorithm performs well for animated objects and supports dynamic rendering of height fields for a variety of interesting displacement effects. The presented method scales across a range of consumer-grade GPU products. It exhibits a low memory footprint and can be easily integrated into existing art pipelines for games and effects rendering.
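The heart of the technique is the march of the view ray against the height field. The following compact C++ sketch shows a generic linear search with a final linear refinement between the last two samples; the sign conventions, fixed step count, and the procedural sampleHeight() stand-in are assumptions for illustration and do not reproduce the chapter's adaptive sampling scheme.

```cpp
// Generic linear-search ray march against a height field, with refinement.
#include <cmath>

struct Vec2 { float x, y; };

// Stand-in for a height-map texture fetch (1 = surface, 0 = deepest point);
// a real shader would sample a texture here.
float sampleHeight(Vec2 uv) {
    return 0.5f + 0.5f * std::sin(uv.x * 20.0f) * std::sin(uv.y * 20.0f);
}

// viewX/Y/Z: tangent-space direction from the surface point toward the eye.
Vec2 parallaxOcclusionUV(Vec2 uv, float viewX, float viewY, float viewZ,
                         float heightScale, int numSteps) {
    // Per-step texture-space offset across the full parallax range.
    Vec2 delta = { -viewX / viewZ * heightScale / numSteps,
                   -viewY / viewZ * heightScale / numSteps };
    float stepSize = 1.0f / numSteps;

    float rayHeight = 1.0f;   // ray enters at the top of the height field
    float prevHeight = sampleHeight(uv);
    Vec2 prevUV = uv;

    // Linear search: step until the ray dips below the sampled height.
    for (int i = 0; i < numSteps; ++i) {
        prevUV = uv;
        uv = { uv.x + delta.x, uv.y + delta.y };
        rayHeight -= stepSize;
        float h = sampleHeight(uv);
        if (h >= rayHeight) {
            // Refine: intersect the ray segment with the segment between
            // the last two height samples.
            float after  = h - rayHeight;
            float before = (rayHeight + stepSize) - prevHeight;
            float t = before / (before + after);
            return { prevUV.x + t * delta.x, prevUV.y + t * delta.y };
        }
        prevHeight = h;
    }
    return uv;   // no hit found: use the fully offset coordinates
}
```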
Speaker Bio:
Natalya Tatarchuk is a staff research engineer in the demo group of ATI's 3D Application Research Group, where she likes to push GPU boundaries investigating innovative graphics techniques and creating striking interactive renderings. Her recent achievements include leading the creation of the state-of-the-art realistic rendering of city environments in the ATI demo “ToyShop”. In the past she was the lead of the tools group at ATI Research. She has published articles in technical book series such as ShaderX and Game Programming Gems, and has presented talks at SIGGRAPH and at Game Developers Conferences worldwide. Natalya holds BA degrees in Computer Science and Mathematics from Boston University and is currently pursuing a graduate degree in Computer Science with a concentration in graphics at Harvard University.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: Atmospheric effects, especially for outdoor scenes in games and other interactive applications, have always been subject to coarse approximations due to the computational expense inherent in their mathematical complexity. However, the ever-increasing power of GPUs allows more sophisticated models to be implemented and rendered in real time. This chapter will demonstrate several ways developers can improve the level of realism and sense of immersion in their games and applications. The work presented here draws heavily on research done by the graphics community in recent years and combines it with novel ideas developed within Crytek to realize implementations that efficiently map onto graphics hardware. In that context, issues of integration into game production engines will be part of the discussion.
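As one small, self-contained example of the closed-form models this line of work favors, the sketch below integrates an exponential height-fog density analytically along the view ray, avoiding per-pixel ray marching. The density model and all names are generic illustrations, not necessarily Crytek's implementation.

```cpp
// Exponential height fog with a closed-form integral along the view ray.
// Density model: d(h) = d0 * exp(-falloff * h), which integrates analytically.
#include <cmath>

// Fog opacity accumulated from the camera (at height camHeight) to a point
// at distance 'dist' along a ray whose direction has vertical component dirZ.
float heightFogOpacity(float camHeight, float dirZ, float dist,
                       float d0, float falloff) {
    // Integral of d0 * exp(-falloff * (camHeight + dirZ * t)) dt over [0, dist].
    float k = falloff * dirZ;
    float integral;
    if (std::fabs(k) < 1e-5f)   // nearly horizontal ray: density is constant
        integral = d0 * std::exp(-falloff * camHeight) * dist;
    else
        integral = d0 * std::exp(-falloff * camHeight)
                      * (1.0f - std::exp(-k * dist)) / k;
    return 1.0f - std::exp(-integral);   // transmittance -> opacity
}
```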
Speaker Bio:
Carsten Wenzel is a software engineer and member of the R&D staff at Crytek. During the development of FAR CRY he was responsible for performance optimizations on the CryEngine. Currently he is busy working on the next iteration of the engine to keep pushing future PC and next-gen console technology. Prior to joining Crytek, he received his M.S. in Computer Science from Ilmenau University of Technology, Germany, in early 2003. Recent contributions include GDC(E) presentations on advanced D3D programming, AMD64 porting and optimization opportunities, as well as articles in ShaderX 2.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: Starting with the release of Half-Life 2 in November 2004, Valve has been shipping games based upon its Source game engine. Other Valve titles using this engine include Counter-Strike: Source, Lost Coast, Day of Defeat: Source and the recent Half-Life 2: Episode 1. At the time that Half-Life 2 shipped, the key innovation of the Source engine’s rendering system was a novel world lighting system called Radiosity Normal Mapping. This technique uses a novel basis to economically combine the soft realistic lighting of radiosity with the reusable high-frequency detail provided by normal mapping. In order for our characters to integrate naturally with our radiosity normal mapped scenes, we used an irradiance volume to provide directional ambient illumination in addition to a small number of local lights for our characters. With Valve’s recent shift to episodic content development, we have focused on incremental technology updates to the Source engine. For example, in the fall of 2005, we shipped an additional free Half-Life 2 game level called Lost Coast and the multiplayer game Day of Defeat: Source. Both of these titles featured real-time High Dynamic Range (HDR) rendering and the latter also showcased the addition of real-time color correction to the engine. In this talk, we will describe the unique aspects of Valve's shading techniques in detail.
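For readers unfamiliar with the technique, the sketch below illustrates the per-pixel blend at the core of Radiosity Normal Mapping, assuming the widely published Half-Life 2 tangent-space basis; the exact basis and weighting Valve ships may differ from this illustration.

```cpp
// Blend three pre-baked directional lightmaps by how much the per-pixel
// normal faces each basis direction.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// The commonly cited HL2 basis: three orthonormal directions tilted equally
// away from the surface normal (the z axis in tangent space).
static const Vec3 kBasis[3] = {
    { -1.0f / std::sqrt(6.0f),  1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f), -1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    {  std::sqrt(2.0f / 3.0f),  0.0f,                   1.0f / std::sqrt(3.0f) },
};

// lightmap[i] holds the radiosity solution baked for basis direction i.
Vec3 radiosityNormalMap(const Vec3& tangentNormal, const Vec3 lightmap[3]) {
    float w[3], sum = 1e-6f;   // small epsilon guards a degenerate normal
    for (int i = 0; i < 3; ++i) {
        float d = std::max(0.0f, dot(tangentNormal, kBasis[i]));
        w[i] = d * d;          // squared, clamped cosine weight
        sum += w[i];
    }
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 3; ++i) {
        float wi = w[i] / sum; // normalize so the weights sum to one
        result.x += wi * lightmap[i].x;
        result.y += wi * lightmap[i].y;
        result.z += wi * lightmap[i].z;
    }
    return result;
}
```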
Speaker Bio:
Jason L. Mitchell is a software developer at Valve Software, where he works on integrating cutting-edge graphics techniques into the popular Half-Life series of games. Prior to joining Valve in 2005, Jason worked at ATI in the 3D Application Research Group for 8 years. He received a BS in Computer Engineering from Case Western Reserve University and an MS in Electrical Engineering from the University of Cincinnati.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: A new real-time shading model is presented that uses spherical cap intersections to approximate a surface’s incident lighting from dynamic area light sources. This method uses precomputed visibility information for static meshes to compute illumination, with approximate shadows, from dynamic area light sources at run-time. Because this technique relies on precomputed visibility data, the mesh is assumed to be static at render-time (i.e., it is assumed that the precomputed visibility data remains valid at run-time). The ambient aperture shading model was developed with real-time terrain rendering in mind (see Figure 1 for an example) but it may be used for other applications where fast, approximate lighting from dynamic area light sources is desired.
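The geometric core of the model is the intersection of two spherical caps: one representing the precomputed visibility aperture and one representing the area light. The sketch below uses a commonly cited smoothstep approximation for the partial-overlap case; the exact formulation in the course chapter may differ.

```cpp
// Approximate intersection area of two spherical caps.
#include <algorithm>
#include <cmath>

const float kTwoPi = 6.2831853f;

static float smoothstep(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Solid angle of a spherical cap with angular radius r.
static float capArea(float r) { return kTwoPi * (1.0f - std::cos(r)); }

// Caps with angular radii r1, r2 whose axes are separated by angle d.
float sphericalCapIntersection(float r1, float r2, float d) {
    float rMin = std::min(r1, r2);
    if (d <= std::fabs(r1 - r2))
        return capArea(rMin);   // smaller cap fully inside the larger
    if (d >= r1 + r2)
        return 0.0f;            // caps do not overlap
    // Partial overlap: smoothly interpolate between the two extremes.
    float t = 1.0f - (d - std::fabs(r1 - r2)) / (r1 + r2 - std::fabs(r1 - r2));
    return capArea(rMin) * smoothstep(0.0f, 1.0f, t);
}
```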
Bios:
Chris Oat is a senior software engineer in the 3D Application Research Group at ATI where he explores novel rendering techniques for real-time 3D graphics applications. As a member of ATI's demo team, Chris focuses on shader development for current and future graphics platforms. He has published several articles in the ShaderX and Game Programming Gems series and has presented at game developer conferences around the world.
Pedro V. Sander is a member of the 3D Application Research Group of ATI Research. He received his Bachelor's degree from Stony Brook University, and his Master's and PhD in Computer Science from Harvard University. Dr. Sander has done research in geometric modeling, more specifically in efficient rendering techniques and mesh parameterization for high-quality texture mapping. At ATI, he is researching real-time rendering methods using current and next-generation graphics hardware.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)
Abstract: An innovative lighting algorithm is presented that allows scenes to be displayed with approximate global illumination, including ambient occlusion and sky-light effects, at real-time rates. The method scales to highly detailed polygonal scenes and requires only a small amount of pre-computation. The presented technique can be successfully applied to dynamic and animated sequences, and displays a striking aesthetic style by relaxing the traditional constraints of physical correctness and a standard lighting model.
Bios:
Alex Evans started his career in the games industry writing software renderers for innovative UK game developer Bullfrog; after completing a degree at Cambridge University he joined Lionhead Studios full time as one of the lead 3D programmers on the hit game Black & White. His passion is the production of beautiful images through code, both in games such as Rag Doll Kung Fu and Black & White, and through his work (under the name 'Bluespoon') creating real-time visuals for musicians such as Aphex Twin, Plaid and the London Sinfonietta.
Materials: Course notes chapter (PDF), Presentation Slides (PDF)