In the last post, I said that the next post would be a thorough explanation of the AO algorithm. I'm afraid I have lied to you, as this post is not about AO. I will eventually follow up with more detail on the algorithm, just not yet. For now, I'd like to give a quick update on my progress over the last week and on how the project is shaping up.
Looking at the estimated project schedule I submitted as part of the proposal, by the 24th of February I should have finished the full implementation of AO, indirect lighting, and ray-traced reflections. Since I haven't even properly started the AO implementation, it is easy to see that my initial estimates were a tad ambitious. When planning the schedule I totally disregarded the fact that I would have other modules to focus on, and I hadn't really thought about finding a job, which is where quite a lot of my time has gone this semester. However, things are starting to fall into place, and I believe I can make rapid progress through these tasks if I apply myself properly.
I will now sum up the progress I have made over the last couple of days, and give some insight into how I think the remainder of the work will play out.
So far, everything I have demonstrated has used the Utah teapot. Properly testing my implementation will require a scene far more complex than a single teapot, so I have converted and imported the Crytek Sponza test model. This matters because Sponza is the scene used in the demo provided with the paper, which will allow direct comparisons between the two projects.
On top of getting the model imported, I have made some good progress with deferred shading, taking the same approach I used for the forward lighting. I have attached some images below to demonstrate these accomplishments.
Here you can see two different viewpoints. The first image is the shaded scene, while the two images after it show the camera-space normals for the first and second layers respectively. The lighting isn't great yet, and I'm fairly sure there are still some errors in the shading, but I am pleased that everything is working and producing some interesting images. Seeing two-layer separation working on a complex model is exciting, and I am interested to see how the project progresses from here.
Everything is now in a good place to begin work on the ambient occlusion implementation: all the necessary data is stored in the G-Buffer, ready for post-processing. So, as promised, I will follow this up with some additional information about the AO algorithm.