Honours project

An investigation of real-time deep G-Buffer applications in screen-space rendering


Intro

Deep G-Buffers are a two-layer Layered Depth Image technique developed by Michael Mara, Morgan McGuire, Derek Nowrouzezahrai and David Luebke. They showed that, with access to both a visible and an occluded layer of scene information, a variety of screen-space effects can be made more robust. The original paper demonstrated the effectiveness of deep G-Buffers in improving the stability of screen-space global illumination effects, including ambient occlusion and indirect radiosity. However, the authors note that deep G-Buffers have many other possible use cases, such as stereoscopic reprojection, order-independent transparency and depth of field. My aim for the project was to investigate at least one of these suggested applications, to provide further evidence for the effectiveness of deep G-Buffers in screen-space rendering.
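To give a rough idea of the data involved, here is a minimal sketch of how the two layers might be stored (the resource names and exact attributes are my own illustration, not taken from the paper or my project code):

```hlsl
// A deep G-Buffer stores each attribute twice per pixel: slice 0 holds
// the nearest (visible) surface, slice 1 the next surface behind it.
Texture2DArray<float4> gAlbedo;  // surface colour per layer
Texture2DArray<float4> gNormal;  // packed world-space normal per layer
Texture2DArray<float>  gDepth;   // linear view-space depth per layer

// Screen-space passes can then read either layer with identical code.
float SampleLayerDepth(SamplerState s, float2 uv, uint layer)
{
    return gDepth.SampleLevel(s, float3(uv, layer), 0);
}
```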

In the end, I focused on implementing real-time partial occlusion depth of field using deep G-Buffers. Partial occlusion depth of field is the effect where sharp background objects bleed through the edges of blurry foreground objects.

I will now give a brief overview of how the project was implemented before going on to show the results. I am writing this overview a few months after finishing the project, so it is not all that detailed. For a more coherent and detailed description of how the research and development were carried out, you can read the full dissertation here.

Implementation

The project was developed using my own D3D11 framework. The code for the project can be found on GitHub here.

The framework included a basic forward renderer and a simple debug interface. I then extended it to support the deferred screen-space rendering methods.

Deep G-Buffer generation

I implemented the single-pass reprojection technique, which the original paper showed to provide the best balance between performance and quality. This variant uses a geometry shader to emit each primitive to both layers in a single pass, predicting the second layer's depth test from the previous frame's reprojected depths. I also allowed both layers of the G-Buffer to be generated using the simpler two-pass depth-peeling method.
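As a rough illustration of the depth-peeling variant, the sketch below shows what the second pass might look like (this is my own simplification, not the project's actual shader; the minimum-separation constant and resource names are assumptions):

```hlsl
// Second depth-peeling pass: the scene is rendered again, and any
// fragment at or in front of the first layer's stored depth (plus a
// small minimum separation) is discarded, so only the next surface
// behind the visible one survives the depth test.
Texture2D<float> gFirstLayerDepth;       // linear depth from the first pass
static const float kMinSeparation = 0.1; // scene-dependent tuning value

struct PSInput
{
    float4 position  : SV_Position;
    float  viewDepth : VIEW_DEPTH;       // linear view-space depth
};

float4 PSPeelSecondLayer(PSInput input) : SV_Target
{
    float firstDepth = gFirstLayerDepth.Load(int3(input.position.xy, 0));

    // Keep only fragments strictly behind the visible surface.
    if (input.viewDepth <= firstDepth + kMinSeparation)
        discard;

    // ... write the second layer's albedo/normal/depth here ...
    return float4(0, 0, 0, 0);
}
```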

Contrary to the original paper, my comparisons showed that the depth-peeling method was faster than the single-pass reprojection method. I am not sure exactly why this is; potentially there is an error in my code, or geometry shader performance is much slower than expected. The full dissertation contains all of the data and analysis, along with some more detailed reasons for why I believe the performance was so far from what I expected.

Screen-space GI

I first implemented the GI techniques demonstrated in the original paper, to use as a baseline performance comparison between my findings and theirs. This also served as a learning process to better understand how deep G-Buffers can be utilised.
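As an example of how the second layer feeds into these effects, here is a simplified sketch of a single ambient occlusion tap that considers both layers (illustrative only: the paper's actual estimator is more involved, and the reconstruction and occlusion helpers here are assumed):

```hlsl
Texture2DArray<float> gDepth; // slice 0 = visible layer, slice 1 = hidden layer
SamplerState gPointClamp;

float3 ReconstructViewPos(float2 uv, float depth);                 // assumed helper
float  OcclusionTerm(float3 samplePos, float3 pos, float3 normal); // assumed helper

// Evaluate one AO sample tap against the depth of both layers and keep
// the stronger occlusion, so occluders hidden just behind the visible
// surface still contribute instead of popping in and out under motion.
float TwoLayerAOTap(float2 sampleUV, float3 centrePos, float3 centreNormal)
{
    float d0 = gDepth.SampleLevel(gPointClamp, float3(sampleUV, 0), 0);
    float d1 = gDepth.SampleLevel(gPointClamp, float3(sampleUV, 1), 0);

    float ao0 = OcclusionTerm(ReconstructViewPos(sampleUV, d0), centrePos, centreNormal);
    float ao1 = OcclusionTerm(ReconstructViewPos(sampleUV, d1), centrePos, centreNormal);

    // The second layer can only add occlusion the first layer missed.
    return max(ao0, ao1);
}
```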

This turned out reasonably well, showing similar results to the reference material.


Comparison of scene at different times of day with and without indirect lighting.

Partial occlusion depth of field

The partial occlusion method developed for this project is a modification of the Skylanders: SWAP Force depth of field method, available here. When the shaded scene is split into near-field and far-field textures, the far-field image samples from the second layer of the deep G-Buffer, filling in the areas that are occluded by the near-field image. This makes for a very simple algorithm that is relatively easy to implement and can be integrated into an existing DoF implementation. The flow diagram below shows how the algorithm works.


Flow diagram showing how the partial occlusion depth of field algorithm works.
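To make the fill step concrete, here is a minimal sketch of how the far-field texture might be assembled (my own reconstruction from the description above, not the project's actual shader; the resource names and the signed circle-of-confusion convention are assumptions):

```hlsl
Texture2D<float4> gShadedLayer0; // shaded visible surface
Texture2D<float4> gShadedLayer1; // shaded second (occluded) layer
Texture2D<float>  gCoC;          // signed circle of confusion; < 0 = near field

// Build the far-field image: wherever a pixel belongs to the near
// field, take its far-field colour from the shaded second layer, so
// background detail survives behind blurry foreground edges.
float4 PSFarFieldFill(float4 pos : SV_Position) : SV_Target
{
    int3 p = int3(pos.xy, 0);

    if (gCoC.Load(p) < 0.0)
        return gShadedLayer1.Load(p); // occluded here: fill from layer 2

    return gShadedLayer0.Load(p);
}
```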

The algorithm's main cost is in shading both layers of the deep G-Buffer, which is required to avoid discontinuities between the near-field and far-field images. I theorised that a stencil could be created from the near-field image and used to mask the second-layer shading, so that no time is wasted shading parts of the second layer that are never going to be used, potentially making the algorithm fast enough for use in a real application. More detailed information can be found in the dissertation, including some additional notes on how the near-field blur had to be modified to achieve the desired effect. There is plenty more time that I could spend improving the performance and quality of the final algorithm; however, I had to balance time spent programming against time spent writing my dissertation.
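The sketch below shows the masking idea in shader form. This is untested speculation matching the theory above, not measured code; in a real implementation the mask would live in the stencil buffer so that rejected pixels never invoke the pixel shader at all, rather than relying on `discard`:

```hlsl
// Hypothetical mask: 1 where the near field covers (or, once blurred,
// will cover) a pixel, so second-layer shading is only needed there.
// Dilating the mask by the maximum blur radius is my own assumption.
Texture2D<float> gNearFieldMask;

float4 PSShadeSecondLayer(float4 pos : SV_Position) : SV_Target
{
    if (gNearFieldMask.Load(int3(pos.xy, 0)) == 0.0)
        discard; // the second layer never contributes here; skip shading

    // ... full shading of the second-layer G-Buffer sample ...
    return float4(0, 0, 0, 0);
}
```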

Below you can see a simple comparison between two images, one with standard DoF and one with partial occlusion DoF.


Standard depth of field produces hard edges and no partial occlusion.


Here you can see that the background dragon is partially visible through the tongue of the foreground dragon. This technique also produces much softer edges for blurry foreground objects.


Conclusion

I am happy with the outcome of the project, as I was able to work with some cool technology and, in the end, felt like I had made some discoveries of my own, however simple they were. In the future, I hope to spend some more time looking at the partial occlusion effect and seeing if I can't improve it further. I would also love to have my work looked at by the original authors of the deep G-Buffer paper, to see how they had imagined deep G-Buffers being used for depth of field and how they think I could improve my own technique.
