Project Journal – Depth of Field

Introduction

So last week, after finishing off the indirect lighting, I decided the practical work was done and that the rest of my time would be spent writing the dissertation. I stuck with this plan for the first couple of days, getting the first drafts of the methodology and literature review finished. After that, though, I found myself a little bored and a little stuck. I was finding it hard to write the other sections, as they required an explanation of the relevance and importance of my research. Although I'm sure it would have been fine, I felt a little cheap trying to justify work that had already been carried out in another paper. So, instead of spending time reasoning about why I hadn't done my own thing, I thought I might as well just give it a go.

The paper mentions four other applications where deep G-Buffers could be useful:

  1. Order-independent transparency: OIT allows a scene of transparent objects to be rendered without any prior depth sorting.
  2. Stereoscopic re-projection: generates a stereoscopic image from a 2D image using the available depth information.
  3. Motion blur: approximates the slight blurring of objects during movement.
  4. Depth of field: simulates the focus of a lens in a camera.


Out of all of these, depth of field (DoF) was the one I was most familiar with, as I created a DoF effect for a previous module in third year. It also produces the most visually appealing results, which is always at the top of my criteria, so it seemed like the perfect choice.

One DoF effect seen in the real world that is yet to be seen in games (as far as I can tell) is partial occlusion: out-of-focus objects near the camera are semi-transparent, resulting in partially visible background objects. This happens with wide-aperture lenses, where light from parts of the background object hits the outer parts of the lens and thus contributes to the final image, as shown in the diagram below (taken from here).

[Diagram: light from a partially occluded background object reaching the sensor through a wide-aperture lens]

I had a slight inkling as to how I could do this, so I thought it was worth a shot. In the end, whether it works or not, it at least gives me a little more to talk about in the dissertation.


Implementation 

There isn't really much theory to discuss here: unlike the other techniques covered so far, DoF implementations generally use whatever methods produce the best-looking results rather than the most physically plausible ones. So, instead, we will get stuck straight into the implementation details.

The last time I implemented a DoF effect I used the method shown here in GPU Gems. The technique worked, but it is quite outdated compared to more modern approaches, so I instead opted for a technique I had recently read about, here. It turns out that this is another piece of work by Morgan McGuire, so I owe him a lot for the amount of his work I have used in this project.

The technique works by separating the scene into two layers based on each pixel's Circle of Confusion (CoC). The CoC is the circle that an out-of-focus cone of rays projects when it hits the image plane of the camera; the diagram below hopefully does a better job of explaining how this looks. For us, the CoC is a value in the range [-1, 1] that tells us how focused a point is and where it sits in the camera's range of focus: -1 is the blurriest point in the far field, 0 is perfectly in focus, and 1 is the blurriest point in the near field.
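To make that mapping a little more concrete, here is a minimal sketch of how a signed CoC could be computed from linear depth. This is my own illustration under assumed camera parameters (focusDistance, nearBlurryPlane and farBlurryPlane are hypothetical names), not code from the paper.

```cpp
#include <algorithm>

// Signed circle of confusion in [-1, 1]:
// +1 = blurriest near-field point, 0 = in focus, -1 = blurriest far-field point.
// Assumes nearBlurryPlane < focusDistance < farBlurryPlane, all hypothetical
// world-space depths chosen for illustration.
float signedCoC(float depth, float focusDistance,
                float nearBlurryPlane, float farBlurryPlane)
{
    if (depth < focusDistance) {
        // Near field: blur grows towards +1 as the point approaches the camera.
        return std::min((focusDistance - depth) /
                        (focusDistance - nearBlurryPlane), 1.0f);
    }
    // Far field: blur grows towards -1 with distance behind the focus plane.
    return std::max((focusDistance - depth) /
                    (farBlurryPlane - focusDistance), -1.0f);
}
```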

As you can probably guess, the scene is split into a near region for CoC values between 1 and 0, and a far region for CoC values between 0 and -1. This allows blurred edges on near-field objects to be composited on top of crisp, in-focus far-field objects. This step gives us two textures that look something like this: the near field is on the left and the far field is on the right.
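The split pass itself is simple. Here is a rough CPU-style sketch of the idea; the Pixel and Image types are hypothetical stand-ins for the actual render targets, and the real thing of course runs in a shader.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper types standing in for render-target textures.
struct Pixel { float r, g, b, a; };

struct Image {
    std::size_t width = 0, height = 0;
    std::vector<Pixel> data;
    Pixel& at(std::size_t x, std::size_t y) { return data[y * width + x]; }
};

// Split the shaded scene into near- and far-field textures based on the sign
// of each pixel's CoC. Near-field pixels carry coverage in alpha so that the
// later blur can fade their edges out.
void splitByCoC(const Image& scene, const std::vector<float>& coc,
                Image& nearField, Image& farField)
{
    for (std::size_t y = 0; y < scene.height; ++y) {
        for (std::size_t x = 0; x < scene.width; ++x) {
            const std::size_t i = y * scene.width + x;
            const Pixel& src = scene.data[i];
            if (coc[i] > 0.0f) {
                nearField.at(x, y) = { src.r, src.g, src.b, 1.0f };
                farField.at(x, y)  = { 0.0f, 0.0f, 0.0f, 0.0f };
            } else {
                nearField.at(x, y) = { 0.0f, 0.0f, 0.0f, 0.0f };
                farField.at(x, y)  = { src.r, src.g, src.b, 1.0f };
            }
        }
    }
}
```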

The next step of the process is blurring both layers. However, as we blur the near field, the areas around edges (like the pillars) become slightly transparent. If we blur like this, then when we composite the layers back together we will get dark edges, as there is no data behind them in the far field to fill those transparent areas. A common workaround is to simply include the near-field values in the far-field texture as well. However, what we can do instead is use data from the second layer of the deep G-Buffer to fill in the dark areas. This gives us the two potential far-field textures shown below.

This means that when we blur the near layer it will be blended with the occluded object behind it. For example, in the top image you can see that the back wall is filled in seamlessly where the pillars previously were.
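In code, the fill step amounts to a per-pixel select between the two layers. Another rough sketch, reusing the hypothetical Image/Pixel helpers from the split above:

```cpp
// Build the "filled" far field: wherever the first G-Buffer layer's pixel went
// into the near field, substitute the shaded colour from the second layer of
// the deep G-Buffer so there is real data behind blurred near-field edges.
void fillFarField(const Image& layer1Colour, const Image& layer2Colour,
                  const std::vector<float>& coc, Image& farField)
{
    for (std::size_t y = 0; y < farField.height; ++y) {
        for (std::size_t x = 0; x < farField.width; ++x) {
            const std::size_t i = y * farField.width + x;
            // Second layer where the front surface is near-field,
            // first layer everywhere else.
            farField.data[i] = (coc[i] > 0.0f) ? layer2Colour.data[i]
                                               : layer1Colour.data[i];
        }
    }
}
```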

The next blurring step is not all that interesting: we downsample to half resolution and perform a basic blur whose size depends on the CoC at each pixel. The blurring algorithm needed quite a lot of tweaking to create the right effect (to be honest, it's still not perfect and could definitely use some extra time). Once blurring has finished, the two layers are composited back together to create the image below: on the left with the standard far-field texture, and on the right with the deep-G-Buffer-filled far-field texture.
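For completeness, here is a brute-force sketch of those last two steps, again using the hypothetical helpers from before. The real pass uses a far more carefully weighted kernel; this just shows the shape of the idea: the kernel radius scales with |CoC|, then the blurred near field goes over the blurred far field.

```cpp
#include <cmath>

// Naive gather blur whose radius scales with |CoC| at each pixel. Assumes dst
// has already been sized to match src.
void blurByCoC(const Image& src, const std::vector<float>& coc,
               float maxRadiusPixels, Image& dst)
{
    const int w = static_cast<int>(src.width);
    const int h = static_cast<int>(src.height);
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // Kernel radius proportional to how out of focus this pixel is.
            const int r = static_cast<int>(
                std::abs(coc[y * w + x]) * maxRadiusPixels);
            Pixel sum = { 0.0f, 0.0f, 0.0f, 0.0f };
            int count = 0;
            for (int dy = -r; dy <= r; ++dy) {
                for (int dx = -r; dx <= r; ++dx) {
                    const int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
                    const Pixel& p = src.data[sy * w + sx];
                    sum.r += p.r; sum.g += p.g; sum.b += p.b; sum.a += p.a;
                    ++count;
                }
            }
            dst.at(x, y) = { sum.r / count, sum.g / count,
                             sum.b / count, sum.a / count };
        }
    }
}

// Composite blurred near over blurred far. Because transparent black pixels
// were averaged into the near field during the blur, its colour is already
// premultiplied by coverage, so a simple "over" works.
Pixel compositeFields(const Pixel& nearP, const Pixel& farP)
{
    return { nearP.r + farP.r * (1.0f - nearP.a),
             nearP.g + farP.g * (1.0f - nearP.a),
             nearP.b + farP.b * (1.0f - nearP.a),
             1.0f };
}
```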

Looking at those two composited images, it is impossible to tell the difference, as the blur is only very slight. However, if we look at a more extreme example, the difference becomes very noticeable.

Hopefully, you can see that on the right the edges of the close dragon are slightly softer, and a little more of the background is visible in some areas (the red curtain is more visible through the jaw). Stupidly, I picked two objects of the same colour, so it is a little hard to make out where they cross over. But one very obvious spot is the tongue of the close dragon: it almost disappears entirely, revealing some of the dragon behind.


Conclusion 

It works! Well, sort of, anyway. There are still some little snags here and there that I believe could be ironed out with more work. However, the deadlines are getting closer and I feel I have done enough to at least prove that the effect is possible with the help of deep G-Buffers. The one key issue here is the importance of the minimum separation between the layers. With the other effects, problems aren't obvious, as we are not looking directly at second-layer information. With this effect, however, the second-layer data is clearly visible, so if the minimum separation is too big and selects the wrong surface, there will be clear discontinuities in the result. In the same way, the minimum separation may catch internal faces; for example, the inside faces of a cup could blend with the outer face as it is blurred. This is an issue faced by other methods that have looked at partial occlusion, so it is not really something to worry about too much, but it is worth noting.

Anyway, this time I am certain that this is the last piece of practical work I will do; from now on I will be focusing purely on the written work. At least now I am a little happier that I am not just copying someone else's paper.
