Course Description

Modern video games employ a variety of sophisticated algorithms to produce ground-breaking 3D rendering that pushes the visual boundaries and interactive experience of rich environments. This course brings together state-of-the-art, production-proven techniques for the fast, interactive rendering of the complex and engaging virtual worlds of video games.

 

This year the course includes speakers from several innovative game companies, including Bungie, Electronic Arts / Frostbite, Guerrilla Games, Studio Gobo, Remedy, Ready at Dawn, MediaMolecule, Epic Games, and Ubisoft Entertainment.

 

This is the course to attend if you are in the game development industry or want to learn the latest and greatest techniques in the real-time rendering domain!

 

Previous years’ Advances course slides: go here.

 


Syllabus

Advances in Real-Time Rendering in Games: Part I

Monday, 10 August 9:00 AM - 12:15 PM | Los Angeles Convention Center, Room 515AB

Advances in Real-Time Rendering in Games: Part II

 

Monday, 10 August 2:00 PM - 5:15 PM | Los Angeles Convention Center, Room 515AB

 

Prerequisites

Working knowledge of modern real-time graphics APIs such as OpenGL or Direct3D, and a solid grounding in commonly used graphics algorithms. Familiarity with the concepts of programmable shading and shading languages. Familiarity with the hardware and software capabilities of shipping game consoles is a plus, but not required.

Intended Audience

Technical practitioners and developers of graphics engines for visualization, games, or effects rendering who are interested in interactive rendering.

Advances in Real-Time Rendering in Games: Part I

 

9:00 am
Natalya Tatarchuk
Welcome and Introduction

 

9:10 am

Sébastien Hillaire (Electronic Arts / Frostbite)

Towards Unified and Physically-Based Volumetric Lighting in Frostbite

 

9:40 am

Tomasz Stachowiak (Electronic Arts / Frostbite)

Stochastic Screen-Space Reflections

10:10 am

Andrew Schneider (Guerrilla Games)

The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn

 

10:50 am

Huw Bowles (Studio Gobo) and Daniel Zimmermann (Studio Gobo)

A Novel Sampling Algorithm for Fast and Stable Real-Time Volume Rendering

 

Huw Bowles (Studio Gobo) and Beibei Wang (Studio Gobo)

Sparkly but not too Sparkly! A Stable and Robust Procedural Sparkle Effect

11:35 am

Ari Silvennoinen (Remedy), Ville Timonen (Remedy)

Multi-Scale Global Illumination in Quantum Break

12:15 pm

Closing Q&A

Advances in Real-Time Rendering in Games: Part II

 

2:00 pm
Tatarchuk
Welcome (and Welcome Back!)


2:05 pm
Matt Pettineo (Ready at Dawn)

Rendering The Alternate History of The Order: 1886

2:50 pm
Alex Evans (MediaMolecule)

Learning from Failure: a Survey of Promising, Unconventional and Mostly Abandoned Renderers for ‘Dreams PS4’, a Geometrically Dense, Painterly UGC Game

 

3:35 pm
Daniel Wright (Epic Games)
Dynamic Occlusion with Signed Distance Fields

4:20 pm
Ulrich Haar (Ubisoft Entertainment), Sebastian Aaltonen (Ubisoft Entertainment)

GPU-Driven Rendering Pipelines

5:15 pm
Tatarchuk

Closing Remarks


Course Organizer

Natalya Tatarchuk is an Engineering Architect currently working on a state-of-the-art, cross-platform, next-gen rendering engine and game graphics for Bungie’s latest title, Destiny, and its future releases. Previously she was a graphics software architect and a project lead in the Game Computing Application Group at AMD Graphics Products Group (Office of the CTO), where she pushed parallel computing boundaries investigating innovative real-time graphics techniques. Before that, she led ATI’s demo team, creating innovative interactive renderings, and led the tools group at ATI Research. She has published papers and articles at various computer graphics conferences and in technical book series, and has presented her work at graphics and game developer conferences worldwide.


Towards Unified and Physically-Based Volumetric Lighting in Frostbite

 

Abstract: Rendering convincing participating media in real-time applications such as games has always been a difficult problem. Particles are often used as a fast approximation of local effects such as dust behind cars or explosions, while large-scale participating media such as depth fog are usually achieved with simple post-process techniques. It is difficult to have all these elements interact efficiently with each other according to the lights in the scene.

 

The authors propose a way to unify these different volumetric representations using physically based parameters: a cascaded volume representing extinction; a voxelization method to project particles into that extinction volume; a simple volumetric shadow map that can then be used to cast shadows from any light, according to every volumetric element in the scene; and finally a solution to render the final participating media.

 

The presented set of techniques and optimizations forms the physically based volumetric rendering framework that will be used for all games powered by Frostbite in the future.
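To make the pieces concrete, here is a minimal, self-contained C++ sketch of the core operation such a framework performs: marching a view ray through a voxelized extinction volume while accumulating transmittance and single scattering. The cubic volume layout, the isotropic phase function, and the unshadowed white light are simplifying assumptions for illustration only, not Frostbite's implementation.

```cpp
// Minimal sketch: march a ray through an extinction volume, accumulating
// Beer-Lambert transmittance and single scattering per step.
#include <cmath>
#include <cstdio>
#include <vector>

struct Volume {
    int n;                      // voxels per axis (cubic volume)
    std::vector<float> sigmaT;  // extinction coefficient per voxel
    float at(int x, int y, int z) const {
        if (x < 0 || y < 0 || z < 0 || x >= n || y >= n || z >= n) return 0.f;
        return sigmaT[(z * n + y) * n + x];
    }
};

// March 'steps' samples along a ray; return in-scattered radiance.
float marchRay(const Volume& vol, float ox, float oy, float oz,
               float dx, float dy, float dz, float stepLen, int steps) {
    float transmittance = 1.f, radiance = 0.f;
    const float lightIntensity = 1.f;                // assume unshadowed light
    const float isotropicPhase = 1.f / (4.f * 3.14159265f);
    for (int i = 0; i < steps; ++i) {
        float t = (i + 0.5f) * stepLen;
        float st = vol.at(int(ox + dx * t), int(oy + dy * t), int(oz + dz * t));
        float scattered = st * isotropicPhase * lightIntensity * stepLen;
        radiance += transmittance * scattered;       // energy toward the eye
        transmittance *= std::exp(-st * stepLen);    // Beer-Lambert absorption
    }
    return radiance;
}

int main() {
    Volume vol{8, std::vector<float>(8 * 8 * 8, 0.1f)};  // homogeneous fog
    float L = marchRay(vol, 0.f, 4.f, 4.f, 1.f, 0.f, 0.f, 0.5f, 16);
    std::printf("in-scattered radiance: %f\n", L);
}
```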

Presenter:

Sébastien Hillaire (Electronic Arts / Frostbite)

Bio:

Sébastien Hillaire obtained his PhD in computer science from the French National Institute of Applied Sciences in 2010, during which he focused on using gaze tracking to visually enhance the virtual reality user experience. After stints at Dynamixyz and Criterion Games, he joined the Frostbite team as a rendering engineer. You can find him pushing the visual quality of the Frostbite engine in many areas, such as volumetric rendering, visual effects, and post-processing.

 

Materials:
Updated August 21st, 2015

PPT with embedded videos (158 MB)


Stochastic Screen-Space Reflections

 

Abstract: In this talk we will present a novel algorithm for rendering screen-space reflections. Our technique robustly handles spatially-varying material properties, such as roughness and normals. It faithfully reproduces specular elongation of microfacet BRDFs, and seamlessly blends with other physically-based rendering techniques. To accomplish this, we use Monte Carlo integration coupled with several variance reduction methods.

 

Through filtered importance sampling, we achieve physically correct, yet noisy results. By reusing rays across local neighborhoods we obtain results similar to tracing multiple rays per pixel, at a fraction of the cost. The same ray reuse scheme also allows us to raytrace at a reduced resolution, yet achieve full-resolution details. We use more rays in difficult areas of the image, thereby only paying a higher cost where noise reduction is necessary. Temporal reprojection provides a further reduction in variance and approximates multiple light bounces. In order to strike a good balance between quality and performance, we adaptively allocate rays between pixels, and use precise hierarchical raytracing where it matters.

 

The technique will be used in Mirror's Edge and multiple other Frostbite games.
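The ray-reuse step can be sketched compactly. In the hedged C++ fragment below, each pixel has traced one importance-sampled ray; the resolve pass averages neighboring hits weighted by the local BRDF over each neighbor's sampling PDF (a ratio estimator). The `RayHit` layout and the stubbed `evalBrdf` are illustrative assumptions, not the Frostbite code.

```cpp
// Minimal sketch of the ray-reuse resolve: average the neighbors' ray hits,
// weighting each by this pixel's BRDF over the neighbor's sampling PDF.
#include <vector>

struct RayHit {
    float radiance[3];  // color fetched at the ray hit
    float pdf;          // PDF with which the ray direction was sampled
};

// Stub: a real version evaluates this pixel's GGX BRDF toward the
// neighbor's ray direction using the local roughness and normal.
static float evalBrdf(int /*px*/, int /*py*/, const RayHit& /*hit*/) {
    return 1.f;
}

// Resolve one interior pixel (callers must keep the window in bounds).
void resolvePixel(int px, int py, int radius,
                  const std::vector<std::vector<RayHit>>& rays, float out[3]) {
    float sumW = 0.f;
    out[0] = out[1] = out[2] = 0.f;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx) {
            const RayHit& h = rays[py + dy][px + dx];
            float w = evalBrdf(px, py, h) / h.pdf;   // importance weight
            for (int c = 0; c < 3; ++c) out[c] += w * h.radiance[c];
            sumW += w;
        }
    if (sumW > 0.f)                                  // ratio estimator
        for (int c = 0; c < 3; ++c) out[c] /= sumW;
}

int main() {
    std::vector<std::vector<RayHit>> rays(3, std::vector<RayHit>(3, {{1, 1, 1}, 1}));
    float out[3];
    resolvePixel(1, 1, 1, rays, out);  // resolve the center pixel
}
```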

 

Presenter:

Tomasz Stachowiak (Electronic Arts / Frostbite)

 

Bio:

Tomasz Stachowiak is a Rendering Engineer in the Frostbite engine team at Electronic Arts. He specializes in physically-based rendering, and is passionate about lighting, climbing the Uncanny Valley, and obsessing over the tiniest details. He previously worked at Creative Assembly, where he would spend his time polishing xenomorphs for Alien: Isolation.

 

Materials:
Updated August 21st, 2015



PPT (58 MB), Video (87 MB)

 


The Real-time Volumetric Cloudscapes of Horizon: Zero Dawn

 

Abstract: Real-time volumetric clouds in games usually pay for fast performance with a reduction in quality. The most successful approaches are limited to low-altitude, fluffy and translucent stratus-type clouds. For Horizon: Zero Dawn, Guerrilla needed a solution that can fill a sky with evolving and realistic results that closely match highly detailed reference images representing high-altitude cirrus clouds and all of the major low-level cloud types, including thick, billowy cumulus clouds. These clouds need to be lit correctly according to the time of day and to exhibit other cloud-specific lighting effects. Additionally, we target a GPU cost of 2 ms. Our solution is a volumetric cloud shader that handles the aspects of modeling, animation, and lighting logically, without sacrificing quality or draw time. Special emphasis will be placed on our solutions for the directability of cloud shapes and formations, as well as on our lighting model and optimizations.
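Two standard ingredients of volumetric cloud lighting that this talk builds on can be sketched in a few lines of C++: Beer-Lambert attenuation and the Henyey-Greenstein phase function. The constants in `main` are illustrative numbers, and the talk's actual lighting model layers further cloud-specific effects on top of these basics.

```cpp
// Two lighting ingredients used by most volumetric cloud renderers.
#include <cmath>
#include <cstdio>

// Beer-Lambert: transmittance through media of extinction 'sigma' over 'depth'.
float beerLambert(float sigma, float depth) {
    return std::exp(-sigma * depth);
}

// Henyey-Greenstein phase: fraction of light scattered toward the viewer,
// given the cosine of the light-to-view angle and eccentricity g in (-1,1).
float henyeyGreenstein(float cosTheta, float g) {
    float g2 = g * g;
    return (1.f - g2) /
           (4.f * 3.14159265f * std::pow(1.f + g2 - 2.f * g * cosTheta, 1.5f));
}

int main() {
    // Light reaching the eye from one cloud sample (illustrative numbers).
    float energy = beerLambert(0.8f, 2.0f) * henyeyGreenstein(0.9f, 0.2f);
    std::printf("energy = %f\n", energy);
}
```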

Presenter:

Andrew Schneider (Guerrilla Games)

Bio:

Andrew Schneider is the Principal FX Artist at Guerrilla in Amsterdam. He spends his time developing the cloud system for Horizon: Zero Dawn and creating FX assets and simulation tools for the FX team. Previously, he worked as a Senior FX Technical Director at Blue Sky Studios, where he developed the volumetrics and cloud pipelines for the Rio and Ice Age animated movies. His interests include simulation, lighting, and volumetrics. He gave three talks at SIGGRAPH from 2011 to 2013 and was nominated for the Annie Award for Best Animated FX in an Animated Feature in 2012.

 

Materials:
Updated August 26th, 2015

PPT (944 MB), PDF (5 MB)



A Novel Sampling Algorithm for Fast and Stable Real-Time Volume Rendering

 


Abstract: Volumetric effects such as clouds add a third dimension to the rendering process, as they require multiple volume samples to be shaded per pixel. In real-time applications such as games, performance constraints often mean samples are few and far between, resulting in severe aliasing under camera motion that is difficult to address. In this work we assume the volume is under-sampled and present a method that eliminates noticeable aliasing by holding volume samples stationary in the volume. We also extend this basic technique to adaptive sampling distributions, where samples are distributed as 1/z to maximize quality near the viewer without sacrificing draw distance. Our results include an expansive cloudscape that is temporally coherent and stable under camera motion, and we provide full source code.
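A minimal sketch of one building block of this idea, sometimes called forward pinning: offset the first ray-march sample so that samples land on a fixed world-space lattice rather than moving with the camera. The hedged C++ fragment below handles only camera motion along the view ray; the general camera-motion case and the 1/z distribution are the subject of the talk and its source release (linked below).

```cpp
// "Forward pinning" sketch: pick the distance to the first sample so that
// samples sit on a fixed world-space lattice along the ray direction.
#include <cmath>
#include <cstdio>

struct float3 { float x, y, z; };

static float dot3(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along 'rayDir' (unit length) to the first volume sample; samples
// then land on planes spaced 'stepLen' apart in world space instead of
// being glued to the camera, which removes crawl under forward motion.
float firstSampleT(float3 camPos, float3 rayDir, float stepLen) {
    float proj = dot3(camPos, rayDir);                // camera advance along ray
    float frac = proj - stepLen * std::floor(proj / stepLen);
    return stepLen - frac;                            // snap to the next plane
}

int main() {
    float3 cam{0.f, 0.f, 3.7f}, dir{0.f, 0.f, 1.f};
    // First sample lands at world z = 4.0 regardless of the 0.7 offset.
    std::printf("first sample at t = %f\n", firstSampleT(cam, dir, 1.f));
}
```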

 

Presenters:

Huw Bowles (Studio Gobo), Daniel Zimmermann (Studio Gobo)

Bios:

Huw Bowles is a lead developer and researcher at Studio Gobo, based in sunny Brighton, UK, which most recently created the Rise Against the Empire playset for Disney Infinity 3.0. Prior to joining Gobo, he worked at Disney Research and Disney Interactive / Black Rock Studios on a number of graphics-related projects. His professional interests include real-time rendering, a number of areas of animation, and gameplay design/programming.

 

Daniel Zimmermann works at Studio Gobo in Zurich as a research scientist and engineer. He is interested in all sorts of real-time applications, in particular rendering and interactive animation approaches. He led the research and development of the AT-AT and AT-ST walkers in Disney Infinity 3.0, for which he implemented and extended novel physics-based animation techniques. For Disney Infinity 1.0, he was part of the team that developed new techniques for ocean rendering. Before joining Gobo, he worked with Disney Research on physics-based control for simulated characters.

 

Materials:

Updated August 24th, 2015

PPT (with embedded videos) (44 MB)

GitHub code project: https://github.com/huwb/volsample (a Unity 5 project that can be run immediately)


Sparkly but not too Sparkly! A Stable and Robust Procedural Sparkle Effect

 


Abstract: We recently worked on a snow sparkle effect for a AAA console title. Due to a number of practical considerations, we implemented a procedural grid-based sparkle effect, which intersects the snow surface with a jittered 3D grid of sparkle shapes. While this worked well for simple scenes and depth ranges, it took thorough analysis and some deep thinking to make it robust and suitable for use in production. In particular, aliasing was a significant issue and required specific treatment to ensure the frequency content was suitable at every pixel, independent of depth. In this talk we will illustrate the various sources of aliasing and present solutions for each case. The lines of thought that led us to our final solution are general in nature and are likely to apply to other procedural shader effects. The end result of our work is an anti-aliased sparkle effect that is stable over the entire range of depths. The artists could comfortably drive the sparkle size down to the order of ~1 pixel without worrying about noisy flickering or other aliasing problems.
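To make the setup concrete, here is a hedged C++ sketch of a jittered-grid procedural sparkle with a depth-aware fade: each 3D cell hashes to one jittered sparkle center, and the sparkle fades out as its world-space radius approaches the footprint of a pixel, suppressing aliasing. The hash and fade formulas are illustrative assumptions, not the production shader presented in the talk.

```cpp
// Jittered-grid sparkle sketch: one sparkle per 3D cell, faded out as its
// world-space radius approaches the footprint of a single pixel.
#include <cmath>
#include <cstdint>
#include <cstdio>

// Cheap integer hash mapped to [0,1), used to jitter sparkle centers.
static float hash3(int x, int y, int z) {
    std::uint32_t h = std::uint32_t(x) * 73856093u ^ std::uint32_t(y) * 19349663u
                    ^ std::uint32_t(z) * 83492791u;
    h ^= h >> 13; h *= 0x5bd1e995u; h ^= h >> 15;
    return float(h & 0xffffffu) / 16777216.f;
}

// Sparkle intensity at surface point p; 'pixelWorldSize' is the world-space
// footprint of one pixel at that point (derived from depth and FOV).
float sparkle(float px, float py, float pz, float cellSize, float pixelWorldSize) {
    int cx = int(std::floor(px / cellSize));
    int cy = int(std::floor(py / cellSize));
    int cz = int(std::floor(pz / cellSize));
    // Jittered sparkle center inside the cell.
    float sx = (cx + hash3(cx, cy, cz)) * cellSize;
    float sy = (cy + hash3(cy, cz, cx)) * cellSize;
    float sz = (cz + hash3(cz, cx, cy)) * cellSize;
    float dx = px - sx, dy = py - sy, dz = pz - sz;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);
    float radius = 0.15f * cellSize;                  // sparkle shape size
    float shape = std::fmax(0.f, 1.f - d / radius);   // soft radial falloff
    // Anti-aliasing: fade to zero as the sparkle shrinks toward one pixel.
    float fade = std::fmin(1.f, radius / (pixelWorldSize + 1e-6f) - 1.f);
    return shape * std::fmax(0.f, fade);
}

int main() {
    std::printf("sparkle = %f\n", sparkle(1.03f, 2.51f, 0.2f, 0.1f, 0.002f));
}
```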

 

Presenters:

Huw Bowles (Studio Gobo), Beibei Wang (Studio Gobo)

Bios:

Huw Bowles is a lead developer and researcher at Studio Gobo, based in sunny Brighton, UK, which most recently created the Rise Against the Empire playset for Disney Infinity 3.0. Prior to joining Gobo, he worked at Disney Research and Disney Interactive / Black Rock Studios on a number of graphics-related projects. His professional interests include real-time rendering, a number of areas of animation, and gameplay design/programming.

 

Beibei Wang works at Studio Gobo, focusing on real-time rendering techniques. She received her Ph.D. in Computer Software and Theory from Shandong University in 2014 and was a visiting student at Telecom ParisTech from 2012 to 2014. Her thesis focused on offline global illumination algorithms.

 

Materials:

Updated August 18th, 2015

PPT (30 MB)

RenderMonkey (install link) shader example workspace (64 KB)

 


Multi-Scale Global Illumination in Quantum Break

Abstract: This talk will cover Remedy’s approach to multi-scale global illumination in Quantum Break. First, we present an efficient voxel tree structure and demonstrate its applications to world-space global illumination and to automatic specular probe generation using local visibility analysis. Second, to complement the large-scale illumination, we present our screen-space lighting solution, which handles small-scale ambient occlusion, reflections, and indirect lighting.
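As a rough illustration of the multi-scale idea, the C++ sketch below queries a toy sparse voxel tree for a coarse world-space occlusion value and combines it with a fine screen-space term. The node layout and the min() combination are assumptions for illustration only; Remedy's actual structure and formulation are the subject of the talk.

```cpp
// Toy multi-scale occlusion: coarse value from a sparse voxel tree,
// combined with a fine screen-space term.
#include <array>
#include <memory>

struct VoxelNode {
    float ao = 0.f;                                   // ambient visibility (1 = open)
    std::array<std::unique_ptr<VoxelNode>, 8> child;  // null child = leaf
};

// Descend to the deepest node containing point p in [0,1)^3.
float queryOcclusion(const VoxelNode& node, float x, float y, float z) {
    int idx = (x >= 0.5f) | ((y >= 0.5f) << 1) | ((z >= 0.5f) << 2);
    const auto& c = node.child[idx];
    if (!c) return node.ao;                           // leaf: stored value
    // Recurse into the child's local coordinate frame.
    auto local = [](float v) { return v >= 0.5f ? 2.f * v - 1.f : 2.f * v; };
    return queryOcclusion(*c, local(x), local(y), local(z));
}

// min() avoids double-darkening where both terms see the same blocker
// (an illustrative choice, not necessarily Remedy's).
float combinedAO(float voxelAO, float ssao) {
    return voxelAO < ssao ? voxelAO : ssao;
}

int main() {
    VoxelNode root;
    root.ao = 0.25f;
    float ao = combinedAO(queryOcclusion(root, 0.3f, 0.7f, 0.2f), 0.6f);
    (void)ao;  // 0.25: the coarse term dominates here
}
```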

 

Presenters:

Ari Silvennoinen (Remedy), Ville Timonen (Remedy)

Bios:

Ari Silvennoinen is a graphics programmer at Remedy, where he works on the research and development of new rendering techniques. Prior to joining Remedy in 2013, he obtained a master’s degree from the University of Helsinki and worked as a principal programmer at Umbra Software for seven years. His main interests are global illumination, visibility algorithms, and real-time rendering, and he has contributed to graphics conferences and journals including SIGGRAPH, I3D, CGF, and EGSR.

 

Ville Timonen joined Remedy as a graphics programmer in 2014 and has developed screen-space techniques for lighting and shadowing in Quantum Break. Prior to joining Remedy, he received his PhD in computer graphics in 2014 from Åbo Akademi University, Finland. His main interests are high-performance rendering algorithms and computing, and he has contributed to graphics journals and conferences including CGF, Eurographics, EGSR, and HPG.

 

Materials:
Updated August 24th, 2015

Hi-res PDF (110 MB), Low-Res PDF with Slide Notes (14 MB)


Rendering the Alternate History of The Order: 1886

 

Abstract: In this session, the author will present details of the in-house rendering technology that was developed for The Order: 1886. The talk will cover various core technologies that were integral to developing the game’s visual style, including antialiasing, cinematic post-processing, shadowing/occlusion, decals, and character rendering. In addition, the author will provide insights into the methodology used by the rendering team to ensure that the game’s technologies would meet the artistic and performance requirements of the title.

Presenter:

Matt Pettineo (Ready at Dawn)

Bio:

Matt Pettineo is currently a Lead Graphics and Engine Programmer at Ready at Dawn Studios, where he recently finished work on The Order: 1886 for the PlayStation 4. His personal blog, The Danger Zone, is home to many articles and code samples that explore various aspects of real-time graphics development. He was also a contributor to OpenGL Insights and co-authored the book “Practical Rendering and Computation with Direct3D 11”.

 

Materials:
Updated August 21st, 2015

PPT (70 MB)


Learning from Failure: a Survey of Promising, Unconventional and
Mostly Abandoned Renderers for ‘Dreams PS4’, a Geometrically Dense, Painterly UGC Game

 

Abstract: Over the last four years, MediaMolecule has been hard at work evolving its brand of ‘creative gaming’. Dreams has a unique rendering engine that runs almost entirely on the PS4’s compute units (no triangles!); it builds on scenes described through operationally transformed CSG trees, which are evaluated on the fly into high-resolution signed distance fields, from which we generate dense multi-resolution point clouds. In this talk we will cover our process of exploring new techniques, and the interesting failures that resulted. The hope is that they provide inspiration to the audience to pursue unusual techniques for real-time image formation. We will chart a series of different algorithms we wrote to try to render Dreams, even as its look and art direction evolved. The talk will also cover the renderer we finally settled on, motivated as much by aesthetic choices as technical ones, and discuss some of the choices we are still exploring for lighting, anti-aliasing, and optimization.
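The CSG-of-SDFs representation at the heart of this pipeline is easy to sketch: signed distances compose with min/max operators. The hedged C++ example below uses the textbook hard operators on spheres; Dreams' actual evaluator (soft blends, GPU evaluation, point extraction) is what the talk explores.

```cpp
// Textbook CSG on signed distance fields: negative = inside the surface.
#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

float sdSphere(Vec3 p, Vec3 c, float r) {
    float dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz) - r;
}

float opUnion(float a, float b)     { return std::min(a, b); }
float opIntersect(float a, float b) { return std::max(a, b); }
float opSubtract(float a, float b)  { return std::max(a, -b); }  // a minus b

// Example tree: a sphere with a smaller sphere carved out of its side.
float sceneSDF(Vec3 p) {
    float body = sdSphere(p, {0.f, 0.f, 0.f}, 1.0f);
    float hole = sdSphere(p, {0.8f, 0.f, 0.f}, 0.5f);
    return opSubtract(body, hole);
}

int main() {
    std::printf("d = %f\n", sceneSDF({0.9f, 0.f, 0.f}));  // > 0: carved away
}
```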

Presenter:

Alex Evans (MediaMolecule)

Bio:

Alex Evans is a co-founder and Technical Director at MediaMolecule, creators of LittleBigPlanet, Tearaway, and most recently, Dreams. His interests lie in ways to achieve the studio’s unique artistic goals through unconventional real-time rendering techniques and tools, especially when they are put in the hands of millions of creative users. This journey began at a young age in the demoscene (as ‘statix’ and ‘bluespoon’) and evolved at Bullfrog Productions (Dungeon Keeper et al.), Lionhead Studios (Black & White et al.), and MediaMolecule. He has also worked with Warp Records, London Sinfonietta, and Flat-E, creating real-time computer graphics to be performed alongside live musicians. From this veteran of the Advances in Real-Time Rendering course, expect a mash-up of techniques and an exploration of the less well travelled parts of real-time graphics.

 

Materials:

Updated August 12th, 2015

PDF (230 MB)

A compressed 34 MB version is also available here (although you really should feast on the high-res renders if you can!).


Dynamic Occlusion with Signed Distance Fields

 

Abstract: This talk will present methods for computing occlusion from dynamic scenes by leveraging signed distance fields, targeted at current-generation consoles. In the world of real-time graphics, traditionally dominated by triangle rasterization, ray tracing signed distance fields makes it possible to efficiently solve incoherent cone visibility queries. These queries are the foundation for sky occlusion, medium-range ambient occlusion, and large-scale area shadows in dynamic worlds.
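The abstract builds on the well-known distance-field soft-shadow trace: sphere-trace toward the light and track the narrowest cone of free space seen along the ray. The C++ sketch below is that textbook formulation, with a toy sphere SDF standing in for a real scene; it is not Epic's exact implementation.

```cpp
// Classic SDF soft shadow: sphere-trace toward the light, tracking the
// narrowest cone of free space seen along the ray.
#include <algorithm>
#include <cmath>
#include <cstdio>

// Example scene SDF: a unit sphere at the origin (stand-in for a real scene).
static float sceneSDF(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.f;
}

// Returns visibility in [0,1]; 'k' narrows or widens the penumbra.
float softShadow(float ox, float oy, float oz,
                 float dx, float dy, float dz,   // unit direction to light
                 float tMin, float tMax, float k) {
    float res = 1.f;
    for (float t = tMin; t < tMax; ) {
        float d = sceneSDF(ox + dx * t, oy + dy * t, oz + dz * t);
        if (d < 1e-4f) return 0.f;       // ray hit an occluder: full shadow
        res = std::min(res, k * d / t);  // narrowest visible cone so far
        t += d;                          // sphere-tracing step
    }
    return res;
}

int main() {
    // Shadow ray grazing the sphere: expect a penumbra value below 1.
    std::printf("visibility = %f\n",
                softShadow(1.2f, -2.f, 0.f, 0.f, 1.f, 0.f, 0.02f, 10.f, 8.f));
}
```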

Presenter:

Daniel Wright (Epic Games)

Bio:

Daniel Wright is a Senior Graphics Programmer at Epic Games. He was a driving force behind the rendering features and performance of Unreal Engine 3 and the Gears of War series. Today he works on rendering technology in Unreal Engine 4, with a passion for lighting.

 

Materials:
Updated August 21st, 2015

PPT (23 MB), PDF (5 MB)


GPU-Driven Rendering Pipelines

Abstract: The first half of the talk will present the GPU-driven rendering pipeline of Assassin's Creed Unity – co-developed by multiple teams at Ubisoft Montreal – that was designed to efficiently render the game’s complex scenes, containing many highly modular buildings and characters.

After a brief introduction, we will describe the core of the pipeline, which supports per-material instance batching instead of the more traditional per-mesh batching. We will then show how this can be combined with mesh clustering to obtain more effective GPU culling, despite coarser draw call granularity. Additional techniques such as shadow occlusion culling and pre-calculated triangle back-face culling will also be discussed.

In the second half of the talk, we will introduce the RedLynx GPU-driven rendering pipeline: a ‘clean slate’ design that builds on the latest hardware features, such as asynchronous compute, indirect dispatch, and multidraw.

Our aim from the outset was to support heavily populated scenes without CPU intervention, using just a handful of indirect draw calls. In practice this allows us to render hundreds of thousands of independent objects with unique meshes, textures, and decals at 60 FPS on current console hardware. We will go into all the details of how we achieve this, including our novel culling system, as well as virtual texturing, which is an integral part of the pipeline.

Finally, to wrap up the talk we will look at how our pipelines could evolve in the future, especially with upcoming APIs such as DirectX 12.
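As a CPU-side illustration of the common core of both pipelines, the hedged C++ sketch below culls the bounding spheres of mesh clusters and appends indirect draw arguments for the survivors; on the GPU this loop would be a compute shader writing to an argument buffer consumed by an indirect draw. The structures mirror D3D-style indexed indirect arguments but are illustrative assumptions, not either studio's code.

```cpp
// CPU-side sketch of cluster culling feeding an indirect draw: test each
// cluster's bounding sphere, append draw arguments for survivors.
#include <cstdint>
#include <vector>

struct Cluster {
    float center[3];
    float radius;                 // bounding sphere
    std::uint32_t indexCount, firstIndex;
};

struct DrawArgs {                 // indexed indirect draw arguments
    std::uint32_t indexCount, instanceCount, firstIndex, baseVertex, baseInstance;
};

// Stub: a real version tests the sphere against the six frustum planes.
static bool sphereInFrustum(const float /*center*/[3], float /*radius*/) {
    return true;
}

// On the GPU this loop is a compute shader appending to an argument buffer.
std::vector<DrawArgs> cullClusters(const std::vector<Cluster>& clusters) {
    std::vector<DrawArgs> visible;
    for (const Cluster& cl : clusters)
        if (sphereInFrustum(cl.center, cl.radius))
            visible.push_back({cl.indexCount, 1u, cl.firstIndex, 0u, 0u});
    return visible;               // becomes the indirect argument buffer
}

int main() {
    std::vector<Cluster> clusters = {{{0.f, 0.f, 0.f}, 1.f, 192u, 0u}};
    auto args = cullClusters(clusters);
    (void)args;
}
```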

Presenters:

Ulrich Haar (Ubisoft Entertainment), Sebastian Aaltonen (Ubisoft Entertainment)

Bios:

Ulrich Haar got his industry start in 1999, working on various games for small, independent studios. Since joining Ubisoft in 2007, he has served as a 3D programmer and tech lead on Tom Clancy's Rainbow 6: Vegas 2, Tom Clancy's Rainbow 6: Osborn, and Assassin's Creed Unity.

 

Sebastian Aaltonen started his professional career at RedLynx 12 years ago. He leads the RedLynx rendering technology team and has been the driving force behind the RedLynx GPU-driven rendering technology. Since RedLynx joined Ubisoft, Sebastian has been actively sharing rendering-related ideas with other Ubisoft teams and has spoken at several internal Ubisoft conferences.

 

Materials:

Updated August 12th, 2015

PPT (13 MB), PDF (4 MB)


Contact: