Project Journal – Smoothing

A bit of a weird title, but smoothing is the theme of the work I have been doing this week. At the end of last week's entry I said that I would improve the quality and performance of the AO as well as begin work on an antialiasing solution for the project.

As promised, I have managed to get both tasks completed, so I wanted to briefly discuss the changes and show off some results. Without further ado, I will begin with a look at the changes made to the AO pass.

Bilateral Blur 

The ambient occlusion I showed in the last post was accurate but very noisy. This is partly due to the low sample count (only 6 samples at the time) and partly because no smoothing was applied. With any AO algorithm there will always be some noise, as complete convergence would require very high sample counts that we can't afford in real-time applications. To combat this we apply a blur to the calculated output to smooth the AO into undersampled areas. For this we use a bilateral blur filter. A bilateral filter is one that smooths noise while preserving hard edges. This is important because although we want our AO to be smooth, we also want it to be crisp. We don't want occlusion leaking between foreground and background objects, as this is physically incorrect and would cause temporal instability as different objects overlap. Since this project is all about screen-space temporal stability, we want to avoid this at all costs.

The bilateral filter works by comparing the depth of each sample in our kernel with the depth of the central pixel we are blurring. We then scale each sample's contribution by how similar the two depths are. At major depth discontinuities close to edges, background pixels contribute nothing to the final value, which avoids leaking between foreground and background objects. The blur used is a separable Gaussian with a width and height of 15 pixels. This is quite a wide kernel, so it smooths the noise out very effectively. On top of this, to improve overall quality, the sample count has been pushed from 6 to 20. This takes the evaluation time for the AO up to 1.3ms from 0.43ms (including the two blur passes the total time is roughly 1.6ms). These values are still with optimisations disabled, so they could improve marginally in a release build.
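To make the idea concrete, here is a rough CPU-side sketch of one horizontal pass of the depth-aware blur (the vertical pass is identical with x and y swapped). The kernel radius, Gaussian sigma and depth falloff constant are illustrative values, not the ones used in the project, and the real thing runs as a pixel shader.

```cpp
// One horizontal pass of a separable, depth-aware (bilateral) blur over an AO
// buffer. Buffer layout and tuning constants are assumptions for illustration.
#include <algorithm>
#include <cmath>
#include <vector>

struct AOBuffer {
    int width = 0, height = 0;
    std::vector<float> ao;     // occlusion term per pixel
    std::vector<float> depth;  // normalised linear depth per pixel
};

void bilateralBlurHorizontal(const AOBuffer& src, AOBuffer& dst,
                             int radius = 7,             // 15-pixel wide kernel
                             float sigmaSpatial = 4.0f,
                             float depthSharpness = 40.0f)
{
    dst.width = src.width;
    dst.height = src.height;
    dst.ao.resize(src.ao.size());
    dst.depth = src.depth; // depth is passed through unchanged

    for (int y = 0; y < src.height; ++y) {
        for (int x = 0; x < src.width; ++x) {
            const int centre = y * src.width + x;
            const float centreDepth = src.depth[centre];

            float sum = 0.0f, weightSum = 0.0f;
            for (int k = -radius; k <= radius; ++k) {
                const int sx  = std::clamp(x + k, 0, src.width - 1);
                const int idx = y * src.width + sx;

                // Gaussian falloff with distance from the centre pixel.
                const float spatial = std::exp(-(k * k) / (2.0f * sigmaSpatial * sigmaSpatial));

                // Depth similarity: samples across a depth discontinuity
                // contribute (almost) nothing, which preserves hard edges.
                const float dz    = src.depth[idx] - centreDepth;
                const float range = std::exp(-depthSharpness * depthSharpness * dz * dz);

                const float w = spatial * range;
                sum       += src.ao[idx] * w;
                weightSum += w;
            }
            dst.ao[centre] = sum / std::max(weightSum, 1e-5f);
        }
    }
}
```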

Below are captures of a frame showing the difference between each blur step.

Comparison of each stage of the blur

I suggest opening the picture in a new window to see the full-res image. Hopefully you can see how much the AO has been smoothed out by the final image. Below is a close-up comparison of one of the leaves, showing how the sharp edge has been preserved through the blur.

Close up comparison of depth

It looks a little blurrier than it should as the image has been scaled up. However, you should be able to see that the crisp edge of the leaf has been preserved by the end of the blur while the noise on its surface has been smoothed out.

You may also be saying to yourself “Damn, those are some crisp edges!”. This is due to the Temporal Supersampled Antialiasing (TSAA), which is the topic of the next section.

Bonus

Just a quick little extra piece of info before moving on to the next section. I found that there were some ugly artefacts when blurring the AO in the first implementation. I knew this had something to do with a lack of depth precision giving us slightly odd blur weights in some areas. Since the AO is stored in a four-channel texture where Red is the AO term and Blue is our normalised depth, we had only given 8 bits of precision to our depth, i.e. only 256 discrete values. For large scenes this is nowhere near enough, and it produces these weird artefacts. After reading a bit, specifically the algorithm overview here, I found that we can pack 16-bit precision depth into two channels of an 8-bit-per-channel texture. This gives us over 65,000 potential values, and we don't use up any additional memory as those channels were already there but never used. Below is a comparison of 8-bit precision and 16-bit precision respectively.

Comparison of depth precision in AO

In example 1 there is a streak on the left pillar due to a large jump in depth value on a smooth surface giving us unwanted contrast.
In example 2 there is a similar streak.
In example 3 you can see that with greater precision we get greater edge preservation, giving us overall crisper AO at edges.

These are great improvements for no more than 3 minutes' work.
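For anyone curious, the packing itself is only a couple of lines. The sketch below shows the general idea of splitting the normalised depth into a high and a low byte and recombining them when computing the blur weights; the exact encoding used in the demo may differ slightly.

```cpp
// Packing a normalised depth into two 8-bit channels (e.g. Blue and Alpha)
// and recovering the full 16-bit value later. Illustrative sketch only.
#include <cstdint>
#include <cstdio>

// Pack depth in [0,1] into two bytes.
void packDepth16(float depth, uint8_t& hi, uint8_t& lo)
{
    const uint32_t fixed = static_cast<uint32_t>(depth * 65535.0f + 0.5f);
    hi = static_cast<uint8_t>(fixed >> 8);   // most significant byte
    lo = static_cast<uint8_t>(fixed & 0xFF); // least significant byte
}

// Reconstruct the 16-bit depth when computing the bilateral blur weights.
float unpackDepth16(uint8_t hi, uint8_t lo)
{
    return (hi * 256.0f + lo) / 65535.0f;
}

int main()
{
    uint8_t hi, lo;
    packDepth16(0.73912f, hi, lo);
    std::printf("recovered depth = %f\n", unpackDepth16(hi, lo));
}
```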

 

TSAA

Because we are using a deferred renderer, hardware Multisampled Antialiasing (MSAA) is not feasible due to the additional memory requirements. So instead we need to apply our own AA as a post-process.

Temporal Supersampled Antialiasing works by applying sub-pixel offsets to the camera projection and then accumulating the current frame with the previous frame. This lets us spread the cost of supersampling over many frames, so the additional overhead per frame is tiny. It is both simple to implement and easy to integrate into any pipeline. This simple average between frames works well for static scenes; however, when the camera begins to move we get what's called “ghosting”.
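Before getting to the ghosting problem, here is roughly what the two pieces just described boil down to in code: a small jitter applied to the projection each frame, and an exponential blend of the new frame into a history buffer. The jitter pattern and blend factor below are illustrative choices, not the project's exact values.

```cpp
// Sub-pixel jitter and simple temporal accumulation, sketched on the CPU.
#include <array>
#include <cstddef>

// Sub-pixel offsets in pixel units, cycled every frame. In practice each
// offset is scaled into NDC (2 * offset / screenSize) and folded into the
// projection matrix so the whole image shifts by less than a pixel.
std::array<float, 2> jitterOffset(std::size_t frameIndex)
{
    static const std::array<std::array<float, 2>, 4> pattern = {{
        {-0.25f, -0.25f}, { 0.25f, -0.25f},
        { 0.25f,  0.25f}, {-0.25f,  0.25f},
    }};
    return pattern[frameIndex % pattern.size()];
}

// Per-channel resolve: blend this frame's colour into the history buffer.
// A small alpha keeps most of the accumulated history, which is what spreads
// the supersampling cost across many frames.
float resolveTSAA(float history, float current, float alpha = 0.1f)
{
    return history + alpha * (current - history);
}
```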

Ghosting

This happens because the surface visible at a pixel in the current frame was at a different screen position in the previous frame, so we accumulate with the wrong colour. To overcome this we need to know where the current pixel was in the last frame. Luckily for us, we have already calculated the previous location when sampling the depth for generating the deep G-Buffer. So in the final accumulation stage we just need to offset our sample location by the screen-space velocity of the current pixel that we calculated earlier, which makes this an even simpler task.
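The resolve then simply reads the history at the reprojected position rather than at the current pixel. Something along these lines, where the buffer names and layout are assumptions for illustration (a real implementation would also double-buffer the history so reads and writes don't alias):

```cpp
// Velocity-based reprojection of the history sample, sketched per pixel.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

struct HistoryBuffer {
    int width = 0, height = 0;
    std::vector<float> colour; // accumulated colour from previous frames
};

// Fetch the history where this surface was last frame, then blend the current
// colour into it (same exponential blend as before).
float resolvePixel(const HistoryBuffer& history, int x, int y,
                   float velX, float velY,       // screen-space motion in pixels
                   float currentColour, float alpha = 0.1f)
{
    // Step back along the pixel's velocity to find its previous position.
    const int px = std::clamp(int(std::lround(x - velX)), 0, history.width - 1);
    const int py = std::clamp(int(std::lround(y - velY)), 0, history.height - 1);

    const float prev = history.colour[std::size_t(py) * history.width + px];
    return prev + alpha * (currentColour - prev);
}
```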

The results are also very impressive. Below is a comparison of the same scene with AA disabled and AA enabled. (It’ll be easier to see if you open the image in another tab)

Teapot AA compare

The edge of the teapot is a lot smoother, as are the edges of the fabric in the background. Although aliasing is not always obvious, AA helps to improve the smoothness of the overall image.

Conclusion

I didn't want to go into too much detail about TSAA as it is not a core part of this research, but it is interesting to look at, and the results are pretty impressive. However, as the camera projection jitters, it can make some parts of the image flicker as pixels change colour. A more complex version of the algorithm is used in Unreal Engine 4 which helps to avoid this flickering as well as improving the general quality of the final image. For now this is not all that important, but if there is spare time towards the end of the project I hope to look further at adopting UE4's technique to get the best possible quality.

One remaining issue with the AO that I would like to remedy is the banding that you can see on smooth surfaces, such as on the pillars either side of the lion's head.

banding

This happens because we are using face normals for the AO pass rather than smooth interpolated normals. I will switch to using interpolated normals and see how it performs. We were using face normals as they are more accurate than the normals that can be stored in an RGBA8 texture; however, smooth normals would be preferable over that extra accuracy.

I am now in a position where I feel ready to start work on the indirect illumination. This will be a lot of work, but I am excited to get some pretty-looking images. I am already quite impressed by how good the current pictures look, so I am looking forward to seeing how far I can push it. This will also include finding more assets to include beyond the teapot and Sponza.

The indirect illumination work will start in a similar way to the AO. I will first capture a frame from the demo to see how it works there. I then hope to get single-bounce illumination working before moving on to multiple bounces. These topics will likely be the themes of the next three entries in this journal.
