Project Journal – New Year Update

I have not done as much on my project as I had hoped. I can now see how ambitious I was during the first semester (not really a bad thing, as it gave me plenty to talk about). I was busy for the first couple of weeks catching up on the maths work and applying for jobs. Over the past week, I have started to make serious progress on my scheduled tasks, gradually catching up to where I should be.

By now I had wanted to have deferred shading implemented and to be most of the way through the deep G-Buffer generation algorithm. Because I spent a long time creating a stable framework, these tasks are not taking as long as I had expected. I was able to get a standard G-Buffer generated fairly quickly, and I am now making good headway on the deep G-Buffer generation. Just before the holidays, I spent a good amount of time working on the lighting for the forward renderer, so hopefully it won't take too long to port that to the deferred pipeline.

I have spent some time looking through the deep G-Buffer source code included with the original paper. It contains everything I need to implement the algorithm in my own engine; the only real difficulty is porting it from OpenGL to D3D, which should be minor. With this available, I should be able to make significant progress.

I will now give a quick explanation of how the algorithm works.

Standard G-Buffer generation is fairly simple. Instead of binding a single render target to the device, we bind an array of render targets and write a different component of the surface data to each one. Currently, I have two render targets: one for the diffuse/albedo colour and another for the view/camera-space normals (an additional target with specular and roughness values will be added once the deep G-Buffer generation is implemented). These render targets are then passed to a shader that reads the data and calculates the lighting at each pixel in the G-Buffer. This is an advantage over forward rendering, as we only calculate lighting for visible surfaces.
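As a rough illustration, here is what the G-Buffer write pass looks like in HLSL. This is a minimal sketch rather than my actual shader, and the names (PSInput, GBufferOutput, diffuseTexture, linearSampler) are placeholders:

```hlsl
// Sketch of a pixel shader writing to two render targets at once.
struct PSInput
{
    float4 position : SV_Position;
    float3 normalVS : NORMAL;     // view-space normal from the vertex shader
    float2 uv       : TEXCOORD0;
};

struct GBufferOutput
{
    float4 albedo : SV_Target0;   // diffuse/albedo colour
    float4 normal : SV_Target1;   // view/camera-space normal
};

Texture2D    diffuseTexture : register(t0);
SamplerState linearSampler  : register(s0);

GBufferOutput PSMain(PSInput input)
{
    GBufferOutput output;
    output.albedo = diffuseTexture.Sample(linearSampler, input.uv);
    output.normal = float4(normalize(input.normalVS), 0.0f);
    return output;
}
```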

For the deep G-Buffer algorithm, each render target in the G-Buffer is created as an array of textures, so we are now binding an array of render target arrays to the device. This gives us two layers of diffuse/albedo and two layers of normals. We also generate two layers of depth at the same time, as the depth buffer is an array of textures as well. Writing to multiple render targets is relatively easy: instead of outputting a single float4 from the pixel shader, we output a struct containing the different values we want to write, using the SV_Target[n] semantics. Writing to different slices of the render target arrays requires slightly more work. To allow for this, we need to add a geometry shader to the pipeline, which selects the array index to draw to using the SV_RenderTargetArrayIndex semantic. The geometry has to be duplicated so that the same primitives are drawn into both layers of the render targets.
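A sketch of that geometry shader is below. Again, the struct names are placeholders; the important parts are the SV_RenderTargetArrayIndex output and the duplication of each triangle into both slices:

```hlsl
// Duplicates each triangle into both layers of the render target arrays.
struct GSInput
{
    float4 position : SV_Position;
    float3 normalVS : NORMAL;
    float2 uv       : TEXCOORD0;
};

struct GSOutput
{
    float4 position : SV_Position;
    float3 normalVS : NORMAL;
    float2 uv       : TEXCOORD0;
    uint   layer    : SV_RenderTargetArrayIndex; // selects the array slice
};

[maxvertexcount(6)]
void GSMain(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    // Emit the same triangle once for each layer of the deep G-Buffer.
    for (uint layer = 0; layer < 2; ++layer)
    {
        for (uint v = 0; v < 3; ++v)
        {
            GSOutput output;
            output.position = input[v].position;
            output.normalVS = input[v].normalVS;
            output.uv       = input[v].uv;
            output.layer    = layer;
            stream.Append(output);
        }
        stream.RestartStrip();
    }
}
```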

The majority of the deep G-Buffer algorithm is then performed in the pixel shader. When drawing to the second layer, we filter out the closest primitives and instead keep only the second-closest ones, writing out G-Buffer data exactly as we would for a single layer. We then have two G-Buffers: one containing all the closest surfaces and one containing all the second-closest surfaces. All of these textures can be passed to a shader that reads from both layers to calculate more accurate screen-space effects, as shown in the paper.
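To give an idea of what that filter might look like, here is a rough sketch that reuses the GBufferOutput struct and texture bindings from the earlier snippet. It assumes the first layer's depth is already available as a shader resource (for example from a previous pass, or from the previous frame as in the paper's single-pass variant), and it compares raw hardware depth values for simplicity, whereas the paper's version works with camera-space depth. The resource and constant names are placeholders:

```hlsl
// Second-layer filtering: keep only fragments sufficiently far behind
// the closest surface already recorded in the first layer.
Texture2D<float> firstLayerDepth : register(t4);

cbuffer DeepGBufferConstants : register(b2)
{
    float minSeparation; // minimum depth gap required between the two layers
};

struct DeepPSInput
{
    float4 position : SV_Position;
    float3 normalVS : NORMAL;
    float2 uv       : TEXCOORD0;
    uint   layer    : SV_RenderTargetArrayIndex; // set by the geometry shader
};

GBufferOutput PSDeepGBuffer(DeepPSInput input)
{
    if (input.layer == 1)
    {
        // Depth of the closest surface at this pixel, from the first layer.
        float firstDepth = firstLayerDepth.Load(int3(input.position.xy, 0));

        // Reject fragments that are not at least minSeparation behind the
        // first layer, leaving only the second-closest surfaces in this slice.
        clip(input.position.z - (firstDepth + minSeparation));
    }

    GBufferOutput output;
    output.albedo = diffuseTexture.Sample(linearSampler, input.uv);
    output.normal = float4(normalize(input.normalVS), 0.0f);
    return output;
}
```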

So in practice, it is a relatively simple algorithm. The problems come from selecting the correct constants to ensure that the objects we want end up in the second layer. By changing the minimum separation, we can include objects closer to or further away from the first layer; the distance needed will often depend on the complexity of the scene being drawn.

So far I have managed to draw to both layers of the deep G-Buffer. I now just need to implement the filtering so that only the second-closest objects are drawn into the second layer. This might take a little while to get working, but hopefully the source code will help to accelerate it. I hope to have the generation finished before the end of the week. I will write a second post to explain any issues or successes I have, hopefully including some pictures to help show what's going on.
