Sunday, March 16, 2014

Blog 8

Depth of Field

Depth of field is the effect of blurriness in select regions of a view based on distance. This effect is caused by the way our eyes work. Unlike the pinhole camera model, where each point in the scene reaches the image plane along a single ray, our eyes require light to pass through a lens to reach the retina. Points at the distance the lens is focused on, the focal plane, converge to a single point on the retina and appear sharp. Points outside the focal plane converge in front of or behind the retina, so they project onto a small disc rather than a point; this disc is called the circle of confusion, and it is what makes out-of-focus regions look blurred.
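
How large that circle gets can be put in numbers with the standard thin-lens model (a general optics result, not something specific to any one rendering technique): for a lens of focal length f and aperture diameter A focused at distance s, a point at distance d projects to a circle of confusion of diameter

c = A \cdot \frac{f}{s - f} \cdot \frac{|d - s|}{d}

which vanishes at the focus distance d = s and grows as a point moves away from it.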

Like motion blur, depth of field can be used to direct the viewer’s attention to specific objects in a scene.



Accumulation Buffer

There are many ways to implement depth of field. One simple implementation uses the accumulation buffer. The basics of this technique involve rendering the scene multiple times with slightly different camera offsets; pixels in the scene then fall into three regions: in focus, close (foreground) and far (background). The important part of these renderings is that the camera keeps the same point of focus, so when the passes are blended together in the accumulation buffer the focal point stays sharp while the outer regions are smeared across the offsets – in a way motion blurred. This implementation is said to produce the most accurate real-time depth of field; however, the number of passes required gives it a sizable performance hit.
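
To make this concrete, here is a minimal sketch of the idea in legacy OpenGL, which exposes the accumulation buffer directly through glAccum. It assumes a fixed-function context created with an accumulation buffer; renderSceneGeometry() and the base eye position are placeholders of mine, and a real implementation would jitter the view frustum more carefully than just moving the eye.

    #include <GL/glu.h>
    #include <cmath>

    // Placeholder: issue the scene's draw calls here.
    void renderSceneGeometry() {}

    void renderWithDepthOfField(int numPasses, float aperture,
                                float focusX, float focusY, float focusZ)
    {
        glClear(GL_ACCUM_BUFFER_BIT);
        for (int i = 0; i < numPasses; ++i) {
            // Offset the eye around a circle the size of the aperture, but
            // aim every pass at the same focal point so it stays sharp.
            float angle = 6.2831853f * i / numPasses;
            float dx = aperture * std::cos(angle);
            float dy = aperture * std::sin(angle);

            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            gluLookAt(dx, dy, 5.0f,            // jittered eye (base position assumed)
                      focusX, focusY, focusZ,  // shared point of focus
                      0.0f, 1.0f, 0.0f);
            renderSceneGeometry();

            // Each pass contributes an equal share of the final average.
            glAccum(GL_ACCUM, 1.0f / numPasses);
        }
        // Write the averaged image back into the colour buffer.
        glAccum(GL_RETURN, 1.0f);
    }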

Layering

This method is also known as the 2.5-dimensional approach. The scene is rendered in layers, with each object assigned to a region of focus based on its depth. So the scene is rendered three times: once with only the in-focus objects, once with the out-of-focus background objects, and once with the out-of-focus foreground objects. These three renderings produce 2D images that can be composited together from back to front: the blurry background is added first, followed by the in-focus objects, and lastly the blurry foreground objects. The problem with this technique arises when an object spans multiple regions, which causes an unnatural transition between regions.
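
The compositing step itself is simple. Below is a sketch assuming the three layers have already been rendered (with the foreground and background already blurred) into same-sized RGBA images, where alpha marks which pixels a layer’s objects cover; the Image type and compositeLayers name are placeholders of mine.

    #include <cstddef>
    #include <vector>

    // One RGBA pixel; alpha marks where a layer's objects cover the screen.
    struct Pixel { float r, g, b, a; };

    struct Image {
        int width, height;
        std::vector<Pixel> pixels;
    };

    // Standard "over" compositing: src drawn on top of dst.
    Pixel over(const Pixel& src, const Pixel& dst)
    {
        float inv = 1.0f - src.a;
        return { src.r * src.a + dst.r * inv,
                 src.g * src.a + dst.g * inv,
                 src.b * src.a + dst.b * inv,
                 src.a + dst.a * inv };
    }

    // Composite the three pre-rendered layers from back to front.
    Image compositeLayers(const Image& background, const Image& inFocus,
                          const Image& foreground)
    {
        Image out = background;
        for (std::size_t i = 0; i < out.pixels.size(); ++i) {
            out.pixels[i] = over(inFocus.pixels[i], out.pixels[i]);
            out.pixels[i] = over(foreground.pixels[i], out.pixels[i]);
        }
        return out;
    }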

Reverse-Mapping

Another technique computes depth of field per pixel. During the initial rendering of the scene, the depth value of each pixel is stored. This depth value is compared against pre-calculated distances to identify which plane of focus the pixel belongs to. Based on the pixel’s region of focus, the size of a filter kernel is determined. The filter kernel is a gather operation in which neighbouring pixels are sampled from within a circular region. Rather than sampling every pixel in the circle, a Poisson-disk pattern is used, which also reduces artifacts. The larger the filter kernel, the blurrier the output pixel; pixels determined to be in focus use a filter kernel the size of a single pixel, so they effectively sample only themselves. An additional step to enhance the final output is to pre-blur the image and blend the blurred version with the original in the final stage.
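
In practice this gather lives in a fragment shader, but the sketch below does the same work on the CPU so it stays self-contained. The focus distances, the maximum blur radius, the eight-tap Poisson pattern and the sampleColor/sampleDepth stand-ins are all assumptions of mine rather than values from any particular implementation.

    #include <algorithm>

    struct Color { float r, g, b; };

    // Stand-ins for texture lookups into the colour and depth buffers.
    Color sampleColor(float u, float v) { return { u, v, 0.5f }; }
    float sampleDepth(float u, float v) { return v; }

    // A small Poisson-disk pattern on the unit disc; real shaders use more taps.
    static const float kTaps[8][2] = {
        {  0.00f,  0.00f }, {  0.53f,  0.71f }, { -0.65f,  0.45f }, { -0.42f, -0.77f },
        {  0.74f, -0.36f }, {  0.11f, -0.98f }, { -0.93f, -0.11f }, {  0.31f,  0.22f }
    };

    // Map a depth value to a blur radius in texture coordinates: zero inside
    // the in-focus range, growing towards the near and far planes.
    float cocRadius(float depth, float focusNear, float focusFar, float maxRadius)
    {
        if (depth >= focusNear && depth <= focusFar) return 0.0f; // in focus
        float d = (depth < focusNear) ? (focusNear - depth) : (depth - focusFar);
        return std::min(d, 1.0f) * maxRadius;
    }

    // Gather samples from a disc whose size depends on the pixel's region of
    // focus; an in-focus pixel has radius 0, so every tap samples the pixel itself.
    Color depthOfFieldPixel(float u, float v)
    {
        float radius = cocRadius(sampleDepth(u, v), 0.4f, 0.6f, 0.01f);
        Color sum = { 0.0f, 0.0f, 0.0f };
        for (int i = 0; i < 8; ++i) {
            Color c = sampleColor(u + kTaps[i][0] * radius,
                                  v + kTaps[i][1] * radius);
            sum.r += c.r; sum.g += c.g; sum.b += c.b;
        }
        return { sum.r / 8.0f, sum.g / 8.0f, sum.b / 8.0f };
    }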


