Project Journal – Single-pass deep G-Buffer generation

At the start of the week, I spoke about my current progress on the project and briefly covered the basics of deep G-Buffer generation. I will now dive a little deeper and give a few more details on just how the algorithm works, including some in-engine screenshots.

Just as a disclaimer before we begin: all of this information comes directly from the original paper and project source code, where everything I describe here is available in more detail.

So last week I talked about how the rendering is set up. I mentioned how, with DirectX, we can draw to multiple render targets from a single shader. I also mentioned that when working with deep G-Buffers each render target holds an array of textures that we draw each of our layers to (e.g. RenderTarget1[0] holds the diffuse colour for the first layer of objects and RenderTarget1[1] holds the diffuse colour for the second layer of occluded objects).

To split our scene into layers we need to be able to detect in our pixel shader which objects to draw to our second layer. To do this we take the depth of the first layer and compare the depth of the current fragment with the depth at the same coordinates in the first layer. If the current fragment is further away from the camera then we draw it; otherwise the fragment is discarded. This works fine if we are generating our deep G-Buffer over multiple passes, as we can take the depth buffer from the first pass and use it in the second pass. However, we want to generate both layers in a single pass, meaning we don't have access to the first layer's depth buffer for this frame. So, how do we overcome this? A few different techniques that solve this issue are presented in the paper; the one I will explain here is the reprojection method.

The reprojection method works by sampling the depth from the previous frame. As you can imagine, during fast movement this causes artefacts, as the sampled depths from the previous frame no longer accurately represent the depth in the current frame. To overcome this, we use the transforms from the previous frame to calculate how much the current fragment has moved, allowing us to sample accurate depths from the previous frame. This can still cause issues around fast-moving objects, but in most cases this technique gives us the most accurate separation. One other technique worth mentioning is the delay technique. This adds an additional frame of latency, giving us access to the vertex positions for frame t+1 and allowing us to perfectly predict the depth of the first layer. Although this always produces the correct result, it requires adding that additional latency, which is often best avoided. Since we are just rendering scenes with little user interaction the latency would likely not be very noticeable; however, we want to re-create what was produced in the paper, so we will continue to use the reprojection method. We will also make use of the screen-space velocity for other effects, so that's an extra bonus.
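In the vertex shader this boils down to transforming each vertex by both this frame's and last frame's matrices. A minimal sketch of what that might look like; the constant buffer and struct names are placeholders of mine, not taken from the project source:

```hlsl
// Sketch of the reprojection data computed per vertex. The matrix and
// struct names here are hypothetical placeholders.
cbuffer PerObject : register(b0)
{
    float4x4 gWorldView;         // world -> camera space, frame t
    float4x4 gWorldViewProj;     // world -> clip space, frame t
    float4x4 gPrevWorldViewProj; // world -> clip space, frame t-1
};

struct VSOutput
{
    float4 position  : SV_Position; // projected position, frame t
    float3 currCSPos : TEXCOORD0;   // camera-space position, frame t
    float4 prevSSPos : TEXCOORD1;   // projected position, frame t-1
};

VSOutput main(float3 position : POSITION)
{
    VSOutput output;
    output.position  = mul(float4(position, 1.0f), gWorldViewProj);
    output.currCSPos = mul(float4(position, 1.0f), gWorldView).xyz;
    // Where this vertex was last frame; dividing by w in the pixel shader
    // gives the reprojected screen-space coordinate.
    output.prevSSPos = mul(float4(position, 1.0f), gPrevWorldViewProj);
    return output;
}
```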

The pipeline for generating the deep G-Buffer looks something like this:

Vertex Shader
    Calculate the camera-space position for the current frame.
    Calculate the camera-space position for the previous frame.
    As usual, calculate the projected screen-space position of the vertex.
    Calculate application-specific values.

Geometry Shader
    For each layer:
        For each vertex:
            Copy the vertex data and add the new vertex to a new triangle.
        Submit the new triangle to the pixel shader.

Pixel Shader
    Calculate the screen-space velocity of the current pixel using the positions from this frame and the previous frame.
    If we are drawing to the second layer:
        Sample the depth from the previous frame using the reprojected screen-space coordinate.
        If the current pixel depth is less than the previous depth plus the minimum separation:
            Discard the pixel.
    Output data to the G-Buffer.
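In HLSL the pixel shader stage might look something like the sketch below. It follows on from the vertex shader sketch above; resource names such as gPrevDepth and gMinSeparation are hypothetical, and I'm assuming the first layer's depth is stored as linear camera-space depth:

```hlsl
// Sketch of the single-pass second-layer test. Names are hypothetical and
// linear camera-space depth is assumed for the stored first-layer depth.
Texture2D<float> gPrevDepth : register(t0); // first-layer depth, frame t-1
SamplerState gPointClamp : register(s0);

cbuffer DeepGBufferParams : register(b0)
{
    float2 gScreenSize;    // render target dimensions in pixels
    float  gMinSeparation; // minimum depth separation between the layers
};

struct PSInput
{
    float4 position  : SV_Position;
    float3 currCSPos : TEXCOORD0; // camera-space position, frame t
    float4 prevSSPos : TEXCOORD1; // projected position, frame t-1
    uint   layer     : SV_RenderTargetArrayIndex; // set by the geometry shader
};

struct PSOutput
{
    float4 diffuse  : SV_Target0;
    float4 normal   : SV_Target1;
    float2 velocity : SV_Target2;
};

PSOutput main(PSInput input)
{
    // Reprojected screen-space coordinate and per-pixel velocity.
    float2 prevUV = (input.prevSSPos.xy / input.prevSSPos.w)
                        * float2(0.5f, -0.5f) + 0.5f;
    float2 currUV = input.position.xy / gScreenSize;

    if (input.layer == 1)
    {
        // Sample last frame's first-layer depth at the reprojected
        // coordinate and discard fragments that belong in the first layer.
        float prevDepth = gPrevDepth.SampleLevel(gPointClamp, prevUV, 0);
        if (input.currCSPos.z < prevDepth + gMinSeparation)
            discard;
    }

    PSOutput output = (PSOutput)0;
    // ... write diffuse and normals as for a single-layer G-Buffer ...
    output.velocity = currUV - prevUV;
    return output;
}
```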

The implementation can be found on the project GitHub here.

Below you can see both layers of the camera-space normals render target. Here we are still rendering back faces to demonstrate the results of the algorithm. We will eventually cull all back faces; as you can imagine, access to the internal faces of an object is not all that useful.

teapot G-Buffer comparison

Next, I will begin work on importing more complex scenes. The current implementation works well for this single object but needs to be tested against heavier scenes. Following that, I want to add some additional debug tools to make it slightly easier to view the different layers of the different render targets from the deep G-Buffer. This will then put me in a good place to begin implementing the screen-space effects utilising the deep G-Buffer.

Project Journal – New year update

I have not done as much on my project as I had hoped. I can now see how ambitious I was during the first semester (not really a bad thing, as it gave me plenty to talk about). I was busy for the first couple of weeks catching up on the maths work and applying for jobs. Over the past week, I have started to make serious progress on my scheduled tasks, slightly catching up to where I should be.

By now I wanted to have deferred shading implemented and be most of the way through the deep G-Buffer generation algorithm. Since I spent a long time creating a stable framework, these tasks are not taking as long as I had thought. I was able to get a standard G-Buffer generated pretty quickly, and am now making good headway on the deep G-Buffer generation. Just before the holidays I spent a good amount of time working on the lighting for the forward renderer, so hopefully it shouldn't take too long to port that to the deferred pipeline.

I have spent some time looking through the deep G-Buffer source code included with the original paper. It includes everything I need to implement the algorithm in my own engine. There is only minor difficulty porting it from OpenGL to D3D. With this information available I should be able to make significant progress.

I will now give a quick explanation of how the algorithm works.

Standard G-Buffer generation is pretty simple. Instead of submitting a single render target to the device, we submit an array of identical render targets that we write the different components to. Currently, I have two render targets: one for the diffuse/albedo colour and another for the view/camera-space normals (an additional target with specular and roughness values will be added after the deep G-Buffer generation is implemented). These render targets are then passed to a shader that reads the data and calculates the lighting at each pixel in the G-Buffer. This is advantageous over forward rendering as we only calculate lighting for visible objects.
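In the pixel shader, writing the components out looks something like this sketch; the input signature and packing are illustrative rather than the exact layout I use:

```hlsl
// Sketch of a pixel shader writing to multiple render targets; the input
// signature and packing are illustrative.
struct GBufferOutput
{
    float4 diffuse : SV_Target0; // diffuse/albedo colour
    float4 normal  : SV_Target1; // view/camera-space normal
};

GBufferOutput main(float4 position : SV_Position,
                   float3 normalCS : NORMAL,
                   float4 albedo   : COLOR)
{
    GBufferOutput output;
    output.diffuse = albedo;
    output.normal  = float4(normalize(normalCS), 0.0f);
    return output;
}
```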

For the deep G-Buffer algorithm, each render target in the G-Buffer is created as an array of textures, so we are now submitting an array of render target arrays to the device. This gives us two layers of diffuse/albedo and two layers of normals. We also generate two layers of depth at the same time, as the depth buffer is also an array of textures. Writing to multiple render targets is relatively easy: instead of outputting a single float4 from our pixel shader, we output a struct with the different values we want to write to our render targets, using the SV_Target[n] semantic (as in the sketch above). Writing to different slices of the render target arrays requires slightly more work. To allow for this we need to add a geometry shader to our pipeline. From there we can select which index of the array we want to draw to using the SV_RenderTargetArrayIndex semantic. We are required to duplicate the geometry so that we draw the same primitives to both layers of the render target, as sketched below.
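A minimal sketch of that geometry shader, assuming a stripped-down vertex layout; in practice every interpolated attribute needs to be copied across:

```hlsl
// Sketch of the layer-duplicating geometry shader. The vertex layout is a
// stripped-down placeholder.
struct GSInput
{
    float4 position : SV_Position;
};

struct GSOutput
{
    float4 position : SV_Position;
    uint   layer    : SV_RenderTargetArrayIndex; // selects the array slice
};

[maxvertexcount(6)] // one triangle per layer
void main(triangle GSInput input[3], inout TriangleStream<GSOutput> stream)
{
    // Emit an identical copy of the triangle into each layer of the
    // render target arrays.
    for (uint layer = 0; layer < 2; ++layer)
    {
        for (uint v = 0; v < 3; ++v)
        {
            GSOutput output;
            output.position = input[v].position;
            output.layer    = layer;
            stream.Append(output);
        }
        stream.RestartStrip();
    }
}
```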

The majority of the deep G-Buffer algorithm is then performed in the pixel shader. When drawing to the second layer we filter out the closest primitives, instead drawing the second-closest primitives and writing out G-Buffer data as we would for a single layer. From here we have two G-Buffers: one containing all the closest objects and one containing all the second-closest objects. We can now pass all of the textures to a shader that reads from both layers to calculate accurate screen-space effects, as shown in the paper.

So in practice it is a relatively simple algorithm. The problems come from selecting the correct constants to ensure that we get the objects we want in the second layer. By changing the minimum separation we can include objects closer to or further away from the first layer. The distance needed will often depend on the complexity of the scene being drawn.

So far I have managed to draw to both layers of the deep G-Buffer. I now just need to implement the filtering so that only the second-closest objects are drawn in the second layer. This might take a little while to get working, but hopefully the source code will help to accelerate the work. I hope to get the generation finished before the end of the week. I will write a second post to explain any issues or successes I have, hopefully including some pictures to help explain what's going on.

Project Journal – Disney BRDF explorer

This week I was able to test the accuracy of my renderer using Disney's BRDF Explorer. It helped to pinpoint some bugs and to explore how my implementation compares to Disney's references.

The BRDF Explorer (available here) is an application that allows the development and comparison of Bidirectional Reflectance Distribution Functions (BRDFs). More info on BRDFs can be found here.

When looking at the objects rendered I could tell that something was wrong, but I was unsure as to what was causing the issue. I had heard of the BRDF Explorer so wanted to give it a go. When comparing the functions I could tell that my implementation wasn't properly responding to changes in view. I quickly found that I had used the normal instead of the view direction to calculate the half vector used in the calculations. After fixing this issue I was able to properly compare the different functions to ensure my implementation was accurate. The Disney principled BRDF is a slightly more complex version of the function I was using, but with the correct parameters the functions were almost identical. Below is a GIF demonstrating the comparison between the two functions.
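For reference, the half vector should bisect the light and view directions; a tiny sketch of the fix (illustrative, not the exact code from my renderer):

```hlsl
// The half vector must bisect the light and view directions. My bug was
// effectively normalize(L + N), which never responds to view changes.
float3 HalfVector(float3 lightDir, float3 viewDir)
{
    return normalize(lightDir + viewDir);
}
```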


Here we can see the almost identical response to the change in incident angle. My implementation is shown in green while Disney's is in red. The minor differences can be put down to the slightly different terms used in each function. Overall, the plot demonstrates the traits I expected from my BRDF. Below are plots of each of the different terms used in each function, varying with material roughness and view angle.


Distribution

Here you can see that the two distribution terms match identically for changing roughness values.
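For reference, this is the standard GGX (Trowbridge-Reitz) distribution; I'm assuming both terms are this form (Disney's GTR2 reduces to it), which the matching curves suggest:

```hlsl
// Sketch of the GGX (Trowbridge-Reitz) normal distribution term, using the
// common alpha = roughness^2 mapping; an assumption, not the project code.
static const float PI = 3.14159265f;

float D_GGX(float NdotH, float roughness)
{
    float a  = roughness * roughness;
    float a2 = a * a;
    float d  = NdotH * NdotH * (a2 - 1.0f) + 1.0f;
    return a2 / (PI * d * d);
}
```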


Fresnel


Here the values are offset slightly to make each plot visible. Mine is shown in purple while Disney's is in blue. You can see that both functions show exactly the same response as the angle of incidence approaches 90 degrees.
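Both curves look like Schlick's approximation to Fresnel (an assumption on my part, but the identical response suggests it):

```hlsl
// Sketch of Schlick's Fresnel approximation. F0 is the reflectance at
// normal incidence; the full BRDF uses the half vector for the dot product.
float3 F_Schlick(float3 F0, float VdotH)
{
    return F0 + (1.0f - F0) * pow(1.0f - VdotH, 5.0f);
}
```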


Geometry


Here you can see quite major differences between the two geometry terms used. Mine is shown in purple and Disney's in red. They both use the Smith-GGX approximation; however, Disney's function remaps roughness into the [0.5, 1] range. Why it does this I am unsure; it is likely down to a visual preference. There often isn't a single correct choice, and it usually comes down to which appears more perceptually correct. I will continue to experiment with these functions on lit objects to determine which I prefer, but I will not spend too much time on this, as the difference is very marginal.
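A sketch of the Smith-GGX geometry term with Disney's remap alongside; the formulation is the standard one from the literature rather than lifted from either codebase:

```hlsl
// Sketch of the Smith-GGX geometry term (one direction), with Disney's
// [0.5, 1] roughness remap as an option; standard formulation assumed.
float G1_SmithGGX(float NdotV, float alpha)
{
    float a2  = alpha * alpha;
    float nv2 = NdotV * NdotV;
    return (2.0f * NdotV) / (NdotV + sqrt(a2 + nv2 - a2 * nv2));
}

float G_Smith(float NdotL, float NdotV, float roughness, bool disneyRemap)
{
    // Disney remaps roughness into [0.5, 1] before squaring.
    float r     = disneyRemap ? 0.5f + 0.5f * roughness : roughness;
    float alpha = r * r;
    return G1_SmithGGX(NdotL, alpha) * G1_SmithGGX(NdotV, alpha);
}
```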

Project Journal – Feasibility demo

On Monday this week I had my feasibility demo. It was a pretty relaxed conversation with my supervisor, going over some of the work I have done so far. The key points were the literature review and the current progress made with the engine. Overall I think it went well. She seemed impressed with what I had achieved so far, and I think I was able to demonstrate my understanding of the subject matter.

In preparation for the demo, I created an online plan for the project. The project is split into stories for each feature that needs to be implemented. The stories are then split into tasks with time estimates for completion. This gives clear deadlines for each deliverable feature and should help to keep the project on track. Each morning I will be able to update the hours spent on each task and the expected time left to completion. You could spend an infinite amount of time trying to perfect each feature; keeping track of the time spent will help to stop this.

The assessment brief states that the purpose of the feasibility demo is to “demonstrate progress in your project to prove that it is at a stage that deems it Feasible”. In previous feedback, I have been told that the project is very ambitious. I hope that the artefacts I demonstrated have shown that the project is feasible and that I am capable of completing the work. Looking at the assessment criteria, I think I should be able to get a good grade. The four criteria are as follows:

  • Grasp of subject matter
  • Capacity for original and creative enquiry
  • Ability to critically evaluate, analyse and synthesise and integrate complex information
  • Communication skills
I have been able to show a grasp of the subject matter through the literature review and a discussion about the requirements for the framework. I think I was also able to communicate my own thoughts clearly. I am less certain about criteria 2 and 3. The literature review contains notes about each possible reference, so in some way that demonstrates critical analysis and evaluation. The literature review also covers a wide range of information not directly related to the core area of research. Hopefully this demonstrates creative enquiry, but I'm not too sure if that's what is meant by creative enquiry. Anyway, I will find out whether I have done enough in two weeks' time.

This week I have spent some time working on the framework, fixing bugs in the renderer and adding additional debug options to the application. I will discuss these changes more in a separate journal entry.

Project Journal – Proposal feedback

Last week I was busy working on assignments, so I didn't find the time to update my journal. This entry will therefore cover my thoughts from both last week and this week.

Last Tuesday the grades came in for the project proposals. I managed to get 4.0/4.5, which I was happy with. My feedback said:

Excellent proposal. Might be a bit ambitious but a reasonable contingency plan is in place. The performance characteristics could have been explained in a little more detail. Some technical terms e.g oracles were introduced but not explained. The punctuation was a bit messed up around inserted references. Overall a well conceived and designed study.

I had mentioned in a previous post that I was worried that, due to my growing familiarity with the subject, I would fail to explain some sections in enough detail. I did really gloss over the “oracles”, which are a key technical detail of the algorithm. When writing the dissertation I will break down the technical details for a thorough explanation, focusing on the reasoning behind why they are used and building up from their most basic components. This is going to be important in producing a good dissertation, as the marker will need to understand what I am talking about without having to read all the related research papers. This is especially important when using jargon like “oracles”.

I also need to read over some Harvard referencing guides to ensure that I am referencing properly. I used quite a lot of references in the proposal, so the punctuation issues were likely down to laziness. I will also look into software to help manage references for the dissertation, as the additional work will require an even greater number of references than were used for the proposal.

It will take a lot of effort to get a better grade on the dissertation, but addressing these problems should help me get there.


This week is the last week before the feasibility demo (which is scheduled for Monday 5th December). I am still not sure exactly what I will need to take to the demo. I have the framework I have been working on, which will be the key talking point. The literature review will hopefully also get me some bonus points. In the proposal, I included a Gantt chart that mapped out the expected completion dates for tasks. I am going to attempt to break each of the larger tasks down into smaller tasks to create a development plan for the entire project. These three items combined should help to prove that I understand the scope of the project and that I am capable and prepared to carry out the required work.

I have two weeks before I head back home, where I will no longer have access to a development machine. In this time I will attempt to get as far as possible with the project development, focusing on improving the usability of the framework as well as getting started on the planned tasks. I planned 4-6 days to add a deferred rendering pipeline to the framework. I aim to get most of this completed in the available two weeks, putting me at a time advantage when I return next semester.

To keep myself focussed over the Christmas break I will try to read some of the example dissertations available on Blackboard. This should give me ideas for how I want to structure my own dissertation and get me in the right mindset to begin work on it as soon as possible. The sooner I can get an early draft completed, the more feedback I can get from my supervisor.

In next week’s journal, I will address any issues raised in the feasibility demo and discuss any development progress.

Project Journal – Ethics

Not much has happened this week with regards to my honours project. The lectures are now drop-ins and the workshops have finished. I did, however, have to fill out the ethics approval form.

Since my project does not require any human participation, the ethics process was very simple. In the end, it took less than a minute. I can't see any way that it would be rejected, so everything looks to be in good shape for the end of the semester. The only things left are the feasibility demo and the risk assessment form. I discussed the feasibility demo a couple of weeks ago; I will come back to it closer to the deadline. The risk assessment form should be as simple as the ethics form. Most research projects within AMG are conducted at the university or at home, so there is very little chance of risk. The risk assessment does need to be signed by the supervisor, which will require meeting in person. As we are both busy at the moment, this can wait until week 13/14 when we will need to meet for the feasibility demo.

Next week I should receive feedback on the proposal. I will discuss what I thought of the feedback and how I plan to address any major issues. Until then I will continue to focus on my other major assignment.

Project Journal – Proposal submitted

Yesterday was the deadline for proposal submission. I am glad that I have finally handed it in, as it sets the project in motion. I got some great feedback from my supervisor and was able to make the required changes to the proposal, hopefully setting it up to receive a good grade.

Personally, I was happy with the final draft. I believe I did a good job of introducing the project and setting the scene for future development. I feel like I gained a good understanding of the background material while writing the proposal. I believe that I managed to get this across. However, it can be easy to presume an understanding of areas that you are familiar with, potentially resulting in key details being left out of the explanation.

The methodology is the one section I am least pleased with; it didn't really set a solid plan for the development of the project. There are still some areas that require further investigation, leaving some vagueness about the methods to be used. Hopefully this shouldn't affect the outcome too badly, as I believe I still gave a good explanation of the plan with the information currently available. I look forward to getting some feedback, hopefully gaining additional insight into the areas I can improve. Specifically, I am hoping for feedback on how the project is written and structured, the explanation of the background details, the research methods proposed and the general feasibility of the project. Feedback in these areas will be useful when it comes to writing the dissertation.

As part of the proposal, I created a Gantt chart to plot out the timeline for the project. I feel it could be an ambitious estimation; however, I think with focused work I should be able to hit all of the deadlines. I have attached an image of the chart below. The green bars are the latest allowable deadlines for each feature, showing the latest possible end date for the project. The orange bars are the estimated durations for each task, and the vertical blue lines show the start and end of tasks running in parallel.

This is the first Gantt chart I have ever made. I have seen them used effectively in some example proposals and thought one would be a good way of showing the order in which I expect to tackle the project. Making a Gantt chart without any prior understanding of how they should be used or laid out is a little risky, but it will hopefully turn out well in the end.

No matter the grade I am excited to start the project and begin work on the dissertation. I have already learned a lot while researching for the proposal which I hope will continue in the following semester.

The last couple of weeks have been focussed on the honours proposal, so I will need to spend the next couple of weeks focussing on other assignments. I will also spend a small amount of time collecting the artefacts I deem necessary for the feasibility demo.

Project Journal – Supervisor feedback

Tuesday last week I had my first meeting with my supervisor. I was able to ask plenty of questions and got some great feedback on my proposal and the project in general.

The current scope of the project is relatively broad and very ambitious. For now, this is good as it gives me the flexibility to scale up or down (mostly down) what I want to achieve by the end of the year; it also gives me plenty to write about in the proposal. I have sent the first draft of my proposal to my supervisor, which we will discuss at our meeting on Tuesday this week. This gives me a whole week to make alterations and move the content over to the template provided.

My current draft of the proposal is slightly over the specified word limit. I went into a lot of detail about the literature and background supporting the project. Since this is quite a technical project, I think this is important as it shows my understanding of the topic and what it requires. However, I think I included too much detail about some of the very early background material, which is very interesting but slightly less relevant. Cutting down on this older information would likely get me just under the word limit. It is also possible that I am not making the most of each available word; there is probably a lot of rambling that I can cut out.

Today was another round of project presentations. I slightly regret going in the first session, as I now know far more about the project than I did then. However, the additional couple of weeks I had to address the feedback I was given were invaluable.

At the lecture today we were given the requirements for the feasibility demo in week 14. I have a good amount to show so far: the framework, the framework design document, a very detailed literature review and the timeline of expected feature completion dates. In the weeks leading up to the demo, I will find time to polish the debug features and attempt to implement a basic deferred rendering pipeline. In parallel with this practical work, I will likely need to produce a more professional set of design documents for the project. The current documents are very sloppy and were just used as a general guide for myself, not intended to be read by others.

By the end of this week, I expect to have the final draft of the proposal completed. I doubt that I will have time to get additional feedback as this is the final week before the deadline. I will then continue work on my next assignment.

Project Journal – Proposal document

The deadline for finished proposal documents is now only two weeks away. Last week I spent a couple of hours every day working on the first draft of my proposal. This included time spent researching the proposal topic as well as writing the separate sections of the document.

Having to explain the research in the document has given me a better understanding of the background material. The project now seems a lot less daunting than it did a few weeks ago.

I have found that as I learn more about the background research my project idea warps slightly. I am continuing to write the proposal with the originally intended idea. Currently, I believe that the way I am writing the proposal is too broad and I need to focus it slightly. I will discuss this with my supervisor to see what they think is best.

To make writing the proposal easier I have split the different sections (Abstract, Introduction, Background, Methodology and Summary) into their own separate documents. This avoids having to deal with the formatting required for the proposal template, making iteration slightly easier. Eventually, when I am happy with all of the sections, I will move them over to the template format, adding references where necessary.

I haven’t continued any practical work this week and likely won’t until the proposal is finished. There are four weeks between the deadline for the proposal and the feasibility demo. I am still not certain what is required for the demo, but I believe that there shouldn’t be too much more to add to what I have done already.

At the start of the semester, I created a list of goals for the renderer that needed to be met before I could start work on the project. Most of these have been met, and most of the remaining items are optional. They are as follows:

  • Improve the rendering quality (shadows, image-based lighting, HDR, etc.)
  • Add a post-effects pipeline
  • Add a deferred rendering path

The deferred rendering path is the only task that is mandatory. This shouldn’t be too difficult to add but may not be of exceptional quality.

Next week I will continue to work on the proposal. I also hope to meet with my supervisor to discuss the project and hopefully get some feedback on the current draft of the proposal.

The final document and all sections will be available here when they are finished.