Project Journal – AO demo frame analysis

Included with the deep G-Buffer research paper is a demo along with some source code. Working through the source code is a little tricky as it is quite a complicated framework, so instead I have decided to use the RenderDoc graphics debugger to analyse a single frame of the application. I will list my findings here out of interest and for future reference. Understanding how the technique was originally implemented will help me when implementing it in my own framework.

The frame I am going to analyse is shown below.

Screen shot of ambient occlusion

It is just a simple display of the dual layer ambient occlusion. I will follow the frame through step by step to understand exactly how the image was produced. I have started with a simple example and will eventually move on to cover the whole process including AO, reflections and indirect lighting.

G-Buffer generation

As is normal for many applications, the frame begins by clearing all render targets and depth buffers. In this case, there are 9 different render targets and 2 depth buffers that are all cleared in preparation for the frame. A lot of these targets are currently redundant, potentially because we are only looking at AO on its own, or simply because the engine binds render targets it does not end up using.

Anyway, after clearing all resources it generates the G-Buffer. Here the application uses the two-pass depth-peeling method rather than the single-pass reprojection method mentioned in the previous post. This is likely because there is no prediction involved in generating the second layer, which avoids potential artefacts. It doesn’t affect how we look at the rest of the frame: we would have the same information either way, just generated with a different method.
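
To make the peeling step concrete, the second geometry pass conceptually keeps only fragments that lie at least some minimum distance behind the surface recorded in the first layer’s depth buffer at the same pixel. Below is a minimal sketch of that test in C++; the names and the separation value are my own placeholders, not taken from the demo’s source.

// Conceptual depth-peeling test for the second G-Buffer layer: a fragment is
// only kept if it lies behind the surface already recorded in the first layer
// at this pixel. Placeholder names and epsilon, not the demo's code.
bool keepInSecondLayer(float fragmentDepth,    // depth of the fragment being rasterised
                       float firstLayerDepth,  // depth stored by the first-layer pass at this pixel
                       float minSeparation = 1e-4f)
{
    return fragmentDepth > firstLayerDepth + minSeparation;
}

In the real pass this comparison would run per fragment against the first layer’s depth texture, and anything that fails it would be discarded, leaving only the surfaces behind the first layer.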

For the first layer generation, there are 6 render targets bound in total. Render target 0 contains the view-space normals while render target 1 contains the diffuse colour. All other render targets appear to be empty, bar render target 5, which is a uniform brownish colour.

First layer camera space normals

First layer diffuse colour

Only 2 render targets are bound for the second layer, in the same layout as before: 0 contains view-space normals while 1 contains diffuse colour. This suggests that the uniform brown target from the first pass is of little importance here.

Second layer of camera space normals


Second layer diffuse colour

Looking at the shader bound to the pipeline tells us that render target 5 contains screen-space motion vectors. We can understand why it is brown by looking at the stored colour (0.5, 0.5, 0.0): we are looking at a stationary image, whereas under movement we would get varying values depending on how much the objects moved during the frame. The x and y values are 0.5 because they have been remapped to fit the [-1, 1] range into a normalised texture; as calculated, the values would have been 0, denoting no movement.
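
As a quick sanity check on that reading, here is the usual remap that packs a velocity component from [-1, 1] into a normalised texture channel; a zero-motion pixel lands exactly on (0.5, 0.5), the uniform value seen in render target 5. This is just my own illustration of the encoding, not the demo’s shader.

#include <cstdio>

// Remap a screen-space velocity component from [-1, 1] into [0, 1] so it can
// be stored in a normalised texture channel (illustrative only).
float encodeVelocity(float v)      { return v * 0.5f + 0.5f; }

// Inverse mapping applied when the motion vectors are read back.
float decodeVelocity(float stored) { return stored * 2.0f - 1.0f; }

int main()
{
    // A stationary pixel has velocity (0, 0), which encodes to (0.5, 0.5):
    // exactly the uniform brown value seen in render target 5.
    std::printf("encoded: (%.2f, %.2f)\n", encodeVelocity(0.0f), encodeVelocity(0.0f));
    return 0;
}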

Now that the G-Buffer has been generated we can use it to calculate the AO.

Calculating Ambient Occlusion

What’s interesting is that pretty much everything I just explained is pointless, as none of it is required for calculating ambient occlusion (although sometimes the normals are used in the AO calculation). What is needed is the depth buffer for each of the layers, which was generated automatically as part of the rasterisation process. The depth of each layer is used to generate the camera-space Z coordinate for each pixel: in a simple pre-pass both depth buffers are taken as input and used to reconstruct the camera-space Z coordinate of each layer, which is then saved into the two channels of an RG16F render target. (RG16F meaning a 16-bit-per-channel floating point format with only a red and a green channel.)
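
For reference, reconstructing camera-space Z from a hardware depth value only needs the near and far plane distances (or the equivalent terms of the projection matrix). Below is a minimal sketch assuming a standard OpenGL-style perspective projection; it shows the general idea rather than the demo’s actual pre-pass shader.

// Reconstruct camera-space Z from a non-linear hardware depth value in [0, 1],
// assuming a standard OpenGL-style perspective projection. Generic sketch, not
// the demo's pre-pass shader; the result is negative in front of the camera,
// matching the usual view-space convention.
float cameraSpaceZ(float hardwareDepth, float nearPlane, float farPlane)
{
    // Map depth from [0, 1] back to NDC [-1, 1].
    float ndcZ = hardwareDepth * 2.0f - 1.0f;
    // Invert the projection's non-linear Z mapping.
    return -(2.0f * nearPlane * farPlane) /
            (farPlane + nearPlane - ndcZ * (farPlane - nearPlane));
}

Running something like this for both depth buffers and writing the two results into the red and green channels would produce the RG16F camera-space Z texture described above.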

The depths for both the first and second layer are displayed below. The values have been scaled slightly to make the visualisation easier. Here darker values are closer to the camera.

First layer depth

Second layer depth

A visualisation of the generated camera-space Z buffer would look much the same in principle, so it is not shown.

Now that all the required information is available to the application, the ambient occlusion can be calculated.

The application uses a modified version of the Scalable Ambient Obscurance algorithm explained here, which, funnily enough, was published by the same authors. The algorithm is complex, so an explanation of how it works will be left for another post.

The generated AO can be seen below. Everything looks red since it is just stored in a single channel of a texture. The final black and white image is just a visualisation of this calculated AO term.

Single channel ambient occlusion

As you can see, the generated image is quite noisy. To reduce the noisiness, the image is put through a separable Gaussian blur.
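
To make the “separable” part concrete: the blur is applied as two 1D passes, horizontal then vertical, which gives the same result as the full 2D Gaussian at a fraction of the per-pixel cost. Below is a single-channel CPU sketch of the idea; the 5-tap kernel weights are ones I picked for illustration, not the demo’s.

#include <algorithm>
#include <vector>

// One 1D Gaussian pass over a single-channel image; (dx, dy) selects the axis.
// Illustrative 5-tap kernel, not the weights used by the demo.
static void blurPass(const std::vector<float>& src, std::vector<float>& dst,
                     int width, int height, int dx, int dy)
{
    const float weights[5] = { 0.0545f, 0.2442f, 0.4026f, 0.2442f, 0.0545f };
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float sum = 0.0f;
            for (int t = -2; t <= 2; ++t) {
                int sx = std::clamp(x + t * dx, 0, width - 1);
                int sy = std::clamp(y + t * dy, 0, height - 1);
                sum += weights[t + 2] * src[sy * width + sx];
            }
            dst[y * width + x] = sum;
        }
    }
}

// Separable blur: horizontal pass into a temporary buffer, then vertical pass back.
void separableGaussianBlur(std::vector<float>& ao, int width, int height)
{
    std::vector<float> tmp(ao.size());
    blurPass(ao, tmp, width, height, 1, 0); // horizontal
    blurPass(tmp, ao, width, height, 0, 1); // vertical
}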

Progressive blur

As you can see from this trio of images, the noise is reduced quite heavily.

And finally, this blurred AO is converted to greyscale to generate the final image (plus some UI).

Conclusion

The process is relatively simple; the most complex part is the AO generation. I do feel that by taking a look at how the application works I understand more clearly what I need to add to my own engine. In simple terms, the process can be summarised in the following steps, with a rough code outline after the list.

  • Generate deep G-Buffer
  • Convert depth to camera-space Z
  • Generate AO
  • Blur
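
And as a rough code outline of the frame as a whole (the function names are placeholders of mine, each standing in for a full render pass):

// High-level outline of the per-frame flow observed in the capture.
// Every function here is an empty stand-in for a real render pass.
void clearTargetsAndDepth() {}     // clear all bound render targets and depth buffers
void generateGBufferLayer(int) {}  // depth-peeled G-Buffer layer (normals, diffuse, motion)
void reconstructCameraSpaceZ() {}  // both depth buffers -> RG16F camera-space Z pre-pass
void computeAmbientObscurance() {} // modified Scalable Ambient Obscurance over both layers
void blurAO() {}                   // separable Gaussian blur
void composeFinalImage() {}        // greyscale visualisation of the blurred AO (plus UI)

void renderAOFrame()
{
    clearTargetsAndDepth();
    generateGBufferLayer(0);       // first layer
    generateGBufferLayer(1);       // second layer, peeled against the first
    reconstructCameraSpaceZ();
    computeAmbientObscurance();
    blurAO();
    composeFinalImage();
}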

After the final blur, the AO texture is ready for use. In the following post, I will go into a deeper explanation of how the AO generation works and what was modified to make it work with deep G-Buffers.
