Sunday, March 30, 2014

Blog 10

In the interest of adding a quick enhancement to the graphics of our game, I looked through some additional effects.

Fog

Fog is the natural phenomenon in which water suspended in the air obscures light: the further away something is, the more the fog obscures the light reaching the viewer. Fog is useful as a way to hide geometry and create a sense of depth and atmosphere. In legacy OpenGL, fog could be achieved simply by enabling it and setting a few attributes; it is applied at the end of the pipeline, affecting the final pixels. With shaders, much more variation and customization is possible, allowing more interesting varieties of fog. To create a fog effect, the distance from a pixel to the camera is taken into account – the further away, the foggier – which is akin to depth of field. There are many other ways to create fog, such as basing it on height, or even using particles so that the fog can be affected by light and shadow.

A simple implementation (in the fragment shader):
vec3 fogColour = vec3(0.722, 0.961, 0.949);    // colour the scene fades to
float z = gl_FragCoord.z / gl_FragCoord.w;     // approximate distance to the camera
float density = 0.001;                         // tune to the scale of the scene
float fog = exp2(-density * density * z * z);  // exponential-squared falloff
fog = clamp(fog, 0.0, 1.0);                    // 1.0 = no fog, 0.0 = fully fogged

fragColour.rgb = mix(fogColour, diffuse, fog); // diffuse is the lit surface colour

Light fog on distant ocean

For more advanced fog effects, the location of the light source can be taken into account so that fog closer to the light appears illuminated – blend between the color of the fog and the color of the light using the dot product of the light direction and the view ray to the pixel (similar to diffuse lighting), on top of the regular fog calculation.
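As a rough sketch of this idea, in the spirit of Quilez's Better Fog article (the function and variable names are my own, density is an assumed constant, and the two fog colours are placeholders):

vec3 applyFog(vec3 sceneColour, float dist, vec3 rayDir, vec3 sunDir)
{
    float fogAmount = 1.0 - exp(-dist * density);   // more fog with distance
    float sunAmount = max(dot(rayDir, sunDir), 0.0); // how closely the view ray faces the light
    vec3 fogColour  = mix(vec3(0.5, 0.6, 0.7),       // base fog colour
                          vec3(1.0, 0.9, 0.7),       // light-tinted fog colour
                          pow(sunAmount, 8.0));      // tighten the glow around the light
    return mix(sceneColour, fogColour, fogAmount);
}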

God rays, also known as light scattering or crepuscular rays, are the visible streaks that appear when light hits small occluders such as particles or fog and scatters. God rays can be implemented as a post-processing effect. First, a vector is created in the fragment shader from the screen-space position of the light source to the current pixel. The image is then sampled along this vector, with decay and weight factors applied to each sample, to determine the pixel's final color.
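A condensed sketch of that light-scattering pass, loosely following the GPU Gems 3 chapter cited below; the uniform and varying names are placeholders and the sample count is arbitrary:

#version 330
uniform sampler2D sceneTex;   // prepass with the light rendered and occluders black
uniform vec2 lightPosSS;      // light position in screen space (0..1)
uniform float density, weight, decay, exposure;
in vec2 texCoord;
out vec4 fragColour;

const int NUM_SAMPLES = 64;

void main()
{
    vec2 uv = texCoord;
    vec2 deltaUV = (uv - lightPosSS) * (density / float(NUM_SAMPLES));
    float illuminationDecay = 1.0;
    vec3 colour = texture(sceneTex, uv).rgb;

    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        uv -= deltaUV;                           // step toward the light
        colour += texture(sceneTex, uv).rgb * illuminationDecay * weight;
        illuminationDecay *= decay;              // samples further along contribute less
    }
    fragColour = vec4(colour * exposure, 1.0);
}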

To continue from last week’s blog, I’ve managed to implement a basic particle system rendered using the geometry shader. On top of being used for smoke, clouds and ash in game, it also serves the purpose of creating in-game messages such as score updates and player instructions.



Quilez, I. (2010). Better Fog. [Web log post] Retrieved from http://www.iquilezles.org/www/articles/fog/fog.htm

Nguyen, H. (2008). Chapter 13. Volumetric Light Scattering as a Post-Process. GPU Gems 3. Retrieved March 30, 2014, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html

Sunday, March 23, 2014

Blog 9

Geometry Shader


The geometry shader is a programmable shader located after the vertex shader and before the rasterization stage. Where the vertex shader processes vertices and the fragment shader processes fragments/pixels, the geometry shader processes primitives. Depending on what the input is set to, the geometry shader takes in a single point, line or triangle and can perform operations such as removing it, adding primitives, or changing it entirely. The geometry shader can even output a completely different type of primitive from the input.

The geometry shader can be used to create particles by receiving a point and outputting a quad that carries the particle’s sprite. With this, all that needs to be sent to the GPU is the position of each particle.
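A minimal sketch of such a shader, assuming the vertex shader has already transformed each particle’s centre into view space and that projection and particleSize are supplied as uniforms:

#version 330
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;      // view space -> clip space
uniform float particleSize;   // half-width of the sprite in view space
out vec2 texCoord;

void main()
{
    vec4 centre = gl_in[0].gl_Position;   // particle centre in view space

    vec2 offsets[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0),
                             vec2(-1.0,  1.0), vec2(1.0,  1.0));
    for (int i = 0; i < 4; ++i)
    {
        gl_Position = projection * (centre + vec4(offsets[i] * particleSize, 0.0, 0.0));
        texCoord = offsets[i] * 0.5 + 0.5;   // map the corner to sprite texture coordinates
        EmitVertex();
    }
    EndPrimitive();    // the four vertices form one camera-facing quad
}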

Another use for the geometry shader is tessellation – increasing the number of polygons, thereby making the model more detailed. With tessellation, a low-poly model can be loaded and refined into a high-poly model using just the geometry shader, saving the costly overhead of complex models. With OpenGL 4, this can also be done using the new dedicated stages – the tessellation control shader and the tessellation evaluation shader.
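As a rough sketch of the two OpenGL 4 stages for a triangle patch (these are two separate shader files, shown together here; tessLevel is a placeholder uniform that could, for example, be driven by distance to the camera):

// --- tessellation control shader ---
#version 400
layout(vertices = 3) out;
uniform float tessLevel;

void main()
{
    // Pass the patch's control points through unchanged.
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
    if (gl_InvocationID == 0)
    {
        gl_TessLevelInner[0] = tessLevel;
        gl_TessLevelOuter[0] = tessLevel;
        gl_TessLevelOuter[1] = tessLevel;
        gl_TessLevelOuter[2] = tessLevel;
    }
}

// --- tessellation evaluation shader ---
#version 400
layout(triangles, equal_spacing, ccw) in;

void main()
{
    // Place each generated vertex using the barycentric coordinate from the tessellator.
    gl_Position = gl_TessCoord.x * gl_in[0].gl_Position
                + gl_TessCoord.y * gl_in[1].gl_Position
                + gl_TessCoord.z * gl_in[2].gl_Position;
}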


Development Progress



It’s been a while since I gave an update on our game. Let’s just say things are progressing bit by bit. Loading in the island map and performing collision detection against it to determine the player’s y position was more troublesome than anticipated; from implementing ray-triangle intersection to finding a way to interpolate within triangles without jitter when moving between them. The terrain collision uses the Möller–Trumbore intersection algorithm to cast a ray from the player straight down and check whether the ray hits a triangle of the terrain. If the ray hits a triangle, the weighting of the triangle’s vertices needs to be determined to calculate the y value at the hit point. This is done using barycentric interpolation, which essentially uses the areas formed by the triangle’s vertices and the intersection point to determine the weights.
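For reference, a sketch of the intersection test, written here in GLSL style for consistency with the rest of the blog even though our version runs on the CPU (the function and parameter names are my own):

// Möller–Trumbore ray-triangle test.
// Returns true on a hit; u and v are the barycentric weights of v1 and v2,
// so the hit point's y is (1 - u - v) * v0.y + u * v1.y + v * v2.y.
bool rayTriangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2,
                 out float t, out float u, out float v)
{
    vec3 e1 = v1 - v0;
    vec3 e2 = v2 - v0;
    vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-6) return false;      // ray is parallel to the triangle
    float invDet = 1.0 / det;

    vec3 s = orig - v0;
    u = dot(s, p) * invDet;
    if (u < 0.0 || u > 1.0) return false;

    vec3 q = cross(s, e1);
    v = dot(dir, q) * invDet;
    if (v < 0.0 || u + v > 1.0) return false;

    t = dot(e2, q) * invDet;                // distance along the ray to the hit
    return true;
}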

Terrain Collision
With the new island level, and cel shading applied to it, things are really coming together from a graphics point of view. Post-processing motion blur has also been implemented, but requires some tweaking. Gameplay, however, is an issue; at the moment there isn’t much to do other than run around to reach the end. There are plans to have a stream of lava creeping up behind the player, and another possibility is randomly falling debris from the eruption.

Screen Space Motion Blur
As we’re only about two weeks away from gamecon, there are still many features that should be implemented in the game, such as shadows, fluid effects for the ocean, etc. If there is one lesson to take away from the Monday lecture, it is that we need particles. From the screenshot, it is clear that the sky is lacking detail, not to mention that there should be dust clouds and smoke from the volcano erupting. A lesson from past lectures is to try adding some of the Brutal Legend VFX. What would also be interesting is to have cartoony particle effects similar to Wind Waker.

Wind Waker Particles


Geometry Shader. (n.d). In OpenGL Wiki. Retrieved from https://www.opengl.org/wiki/Geometry_Shader

Hardgrit, R. (2011). Legend of Zelda: Wind Waker. [Screenshot] Retrieved from http://superadventuresingaming.blogspot.ca/2011/08/legend-of-zelda-wind-waker-game-cube.html

Rideout, P. (2010, September 10). Triangle Tessellation with OpenGL 4.0. [Web log post] Retrieved from http://prideout.net/blog/?p=48

Sunday, March 16, 2014

Blog 8

Depth of Field

Depth of field is the effect of blurriness in select regions of a view based on distance. The effect comes from the way our eyes work. Unlike the pinhole camera model, where a ray of light reaches the retina directly, our eyes require light to travel through a lens to reach the retina. Points on the plane of focus converge to a single point on the retina and appear sharp; points outside the focal plane converge in front of or behind it and are spread over an area known as the circle of confusion, which we perceive as blur.

Like motion blurring, depth of field can be used to direct the viewer’s attention to specific objects in a scene.



Accumulation Buffer

There are many ways to implement depth of field. One simple implementation uses the accumulation buffer. The scene is rendered multiple times with slightly different camera offsets, with pixels falling into regions that are in focus, close (foreground), or far (background). The important part is that every render shares the same point of focus: when the offset renders are blended together in the accumulation buffer, the focal point stays sharp while the outer regions are blurred – in a way, motion blurred. This implementation is said to create the most accurate real-time depth of field, but the number of passes required gives it a sizeable performance hit.

Layering

This method is also known as the 2.5-dimensional approach. The scene is rendered in separate passes, each containing only the objects belonging to a given region of focus, with depth used to decide which region an object is in. So the scene is rendered once with only the in-focus objects, once with the out-of-focus background objects, and once with the out-of-focus foreground objects. These three renderings produce 2D images that are composited together from back to front: the blurry background is added first, followed by the in-focus objects and lastly the blurry foreground objects. The problem with this technique arises when an object spans multiple regions, causing an unnatural transition between them.
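As a small sketch of the composite step, assuming each layer was rendered to its own texture with coverage stored in the alpha channel (the texture names are placeholders):

#version 330
uniform sampler2D backgroundBlur, inFocus, foregroundBlur;
in vec2 texCoord;
out vec4 fragColour;

void main()
{
    vec4 bg = texture(backgroundBlur, texCoord);
    vec4 fo = texture(inFocus, texCoord);
    vec4 fg = texture(foregroundBlur, texCoord);

    vec3 colour = bg.rgb;                    // blurred background first
    colour = mix(colour, fo.rgb, fo.a);      // sharp mid layer over it
    colour = mix(colour, fg.rgb, fg.a);      // blurred foreground on top
    fragColour = vec4(colour, 1.0);
}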

Reverse-Mapping

Another technique does depth of field per pixel. During the initial rendering of the scene, the depth value of each pixel is stored. This depth is compared against pre-calculated distances to identify which plane of focus the pixel belongs to, and that in turn determines the size of a filter kernel. The filter kernel is a gather operation in which pixels are sampled from within a circular region. Rather than sampling every pixel in the circle, a Poisson-disk pattern is used, which also reduces artifacts. The larger the filter kernel, the blurrier the output pixel; pixels determined to be in focus use a kernel the size of the pixel itself, resulting in only that pixel being sampled. An additional step to enhance the final output is to pre-blur the image and blend the blurred image with the original in the final stage.
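A sketch of such a gather pass, loosely following the Scheuermann slides; the uniforms, the pre-blurred texture and the eight-tap Poisson pattern are assumptions for illustration:

#version 330
uniform sampler2D sceneTex;    // sharp scene colour
uniform sampler2D blurTex;     // pre-blurred copy of the scene
uniform sampler2D depthTex;    // per-pixel depth from the first pass
uniform float focalDepth;      // depth of the focal plane (same units as depthTex)
uniform float blurScale;       // how quickly blur grows away from the focal plane
in vec2 texCoord;
out vec4 fragColour;

// Small Poisson-disk pattern: fewer taps than a full circle, fewer artifacts than a grid.
const vec2 poisson[8] = vec2[](
    vec2( 0.00,  0.00), vec2( 0.53, -0.35), vec2(-0.61,  0.23), vec2( 0.23,  0.71),
    vec2(-0.26, -0.73), vec2( 0.87,  0.27), vec2(-0.88, -0.16), vec2( 0.14, -0.94));

void main()
{
    float depth = texture(depthTex, texCoord).r;

    // Size of the circle of confusion: zero at the focal plane, larger further from it.
    float coc = clamp(abs(depth - focalDepth) * blurScale, 0.0, 1.0);

    vec4 colour = vec4(0.0);
    for (int i = 0; i < 8; ++i)
    {
        vec2 offset = poisson[i] * coc * 0.02;   // scale the taps by the kernel size
        // Blend the sharp and pre-blurred images so large kernels stay smooth.
        colour += mix(texture(sceneTex, texCoord + offset),
                      texture(blurTex,  texCoord + offset), coc);
    }
    fragColour = colour / 8.0;
}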


Akenine-Möller, T., Haines, E., & Hoffman, N. (2008). Real-time rendering (3rd ed.). AK Peters. Retrieved from http://common.books24x7.com.uproxy.library.dc-uoit.ca/toc.aspx?bookid=31068.

Demers, J. (2004). Chapter 23. Depth of Field: A Survey of Techniques. GPU Gems. Retrieved March 16, 2014, from http://http.developer.nvidia.com/GPUGems/gpugems_ch23.html

Scheuermann, T. (2004). Advanced Depth of Field [PowerPoint slides]. Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/

Sunday, March 9, 2014

Blog 7

Deferred Shading

Up until now, we have been performing the lighting calculations in our games in either the vertex or fragment shader during the initial pass and then applying post-processing effects; this is called forward shading. In forward shading, the surface and light properties are used immediately in the fragment shader to perform the lighting calculations and output a single render target.

Forward Shading


The other technique is deferred shading. As its name implies, it postpones the lighting to a later stage. From the fragment shader, the attributes used for lighting are written to multiple render targets (MRT) for color, normal, and depth. These render targets make up what is known as the G-buffer, or geometry buffer. Lighting is then calculated as a post-processing effect by sampling the render targets as textures.
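A sketch of the geometry pass writing the G-buffer; the layout follows the color/normal/depth split above, and the texture and varying names are placeholders:

#version 330
in vec3 worldNormal;
in vec2 texCoord;
uniform sampler2D diffuseTex;

layout(location = 0) out vec4 gColour;   // albedo
layout(location = 1) out vec4 gNormal;   // world-space normal
layout(location = 2) out vec4 gDepth;    // depth

void main()
{
    gColour = texture(diffuseTex, texCoord);
    gNormal = vec4(normalize(worldNormal) * 0.5 + 0.5, 0.0);  // pack [-1,1] into [0,1]
    gDepth  = vec4(gl_FragCoord.z, 0.0, 0.0, 0.0);
}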

Deferred Shading


In forward shading, each fragment is shaded as it is rasterized and requires additional calculations for each light; the number of calculations needed is roughly the number of fragments * the number of lights.
Since the fragments include those hidden behind others, a shaded fragment may later be overdrawn, or it may not be affected by a light at all; all of the effort spent on that fragment is wasted.

What is important to note in deferred shading is that the hidden fragments have already been culled before the lighting pass, so lighting only runs on visible pixels. The number of calculations needed is significantly lower: the number of pixels * the number of lights. As you can see, the cost of deferred lighting is independent of the number of objects in a scene, whereas in forward shading, the more objects there are, the more fragments and the slower the performance.

Additionally, with deferred shading, lighting can be optimized by determining each light’s region of influence in world space. Treating a point light’s region as a sphere, a spot light’s as a cone and a directional light’s as a box, we can determine which pixels are affected and perform shading only on the pixels inside a specific light’s region of influence. With this technique, many more lights can be rendered than with forward shading; this is one of the prime advantages of deferred shading – creating scenes with many lights.
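As a sketch of the lighting pass for a single point light, assuming for brevity that a world-space position target is also available in the G-buffer (otherwise the position can be reconstructed from depth); the names and the attenuation curve are placeholders:

#version 330
uniform sampler2D gColour;     // albedo from the G-buffer
uniform sampler2D gNormal;     // packed world-space normal
uniform sampler2D gPosition;   // world-space position (assumed extra target)
uniform vec3 lightPos;
uniform vec3 lightColour;
uniform float lightRadius;     // radius of the point light's sphere of influence
in vec2 texCoord;
out vec4 fragColour;

void main()
{
    vec3 albedo   = texture(gColour, texCoord).rgb;
    vec3 normal   = texture(gNormal, texCoord).xyz * 2.0 - 1.0;  // unpack [0,1] -> [-1,1]
    vec3 position = texture(gPosition, texCoord).xyz;

    vec3 toLight = lightPos - position;
    float dist = length(toLight);
    if (dist > lightRadius)
        discard;                               // outside the region of influence

    float attenuation = 1.0 - dist / lightRadius;
    float diff = max(dot(normal, toLight / dist), 0.0);
    fragColour = vec4(albedo * lightColour * diff * attenuation, 1.0);
}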

A disadvantage of deferred shading is the memory cost of storing the G-buffer. The overhead of storing multiple render targets can be a problem on some hardware, but as technology improves this is becoming less and less of an issue. Antialiasing and transparency are also problematic and require a different approach; the techniques employed to handle them with deferred shading are inefficient, and as such may actually lower performance compared to forward shading.

Advantages of deferred shading
  • Faster more efficient lighting calculations
  • More lights can be added
  • Lighting information can be accessed by other shaders
  • No additional geometry passes needed

Disadvantages of deferred shading
  • Memory cost
  • Antialiasing difficulty
  • Transparency done separately
  • Can’t be done with old hardware

Example of MRT used for deferred shading

Akenine-Möller, T., Haines, E., & Hoffman, N. (2008). Real-time rendering (3rd ed.). AK Peters. Retrieved from http://common.books24x7.com.uproxy.library.dc-uoit.ca/toc.aspx?bookid=31068.

Gunnarsson, E., & Olausson, M. (2011). Deferred Shading [Seminar notes]. Retrieved from http://www.cse.chalmers.se/edu/year/2011/course/TDA361/Advanced%20Computer%20Graphics/DeferredRenderingPresentation.pdf

Hogue, A. (2014, March 7). Deferred Shading [PowerPoint slides]. Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/

Nguyen, H. (2008). Chapter 19. Deferred Shading in Tabula Rasa. GPU Gems 3. Retrieved March 9, 2014, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch19.html

Valient, M. (2007). Deferred Rendering in Killzone 2. Guerrilla Games. Retrieved from http://www.guerrilla-games.com/presentations/Develop07_Valient_DeferredRenderingInKillzone2.pdf



Sunday, March 2, 2014

Blog 6

This week was the beginning of midterms. Because of that, we only briefly covered image processing using different color models such as HSL/HSV, along with some techniques for shadow mapping. The topic of this post will be:

Motion Blur


We have all seen the effect motion has on our view of the world. When an object, or our own viewpoint, moves faster than our eyes can resolve, details streak and blur – we cannot perceive every instant of the movement, and the result is motion blur. Although motion blur can be problematic when capturing images and video, it can also be employed to create a sense of realistic motion as well as to direct focus to the object in motion. In games, motion blur has the added benefit of padding out a low-FPS scene to hide jerkiness. Too much motion blur, however, can have negative effects such as causing dizziness.

A fast way to implement simple motion blur in OpenGL is to use the accumulation buffer. This buffer is similar to the back/front buffers in the sense that it stores an image, but it can accumulate multiple renders and blend them together using weights.

glClear(GL_ACCUM_BUFFER_BIT);
//draw scene here
glAccum(GL_ACCUM, 0.5);

//change scene
//draw scene here
glAccum(GL_ACCUM, 0.5);

glAccum(GL_RETURN, 1.0);

In the above, the accumulation buffer is first cleared each frame, then the scene is rendered and added to the accumulation buffer with a weighting of 0.5. Afterwards the scene is altered (perhaps offset by velocity) and added again with the same weighting. There isn’t a set limit to how many renders you can accumulate other than hardware and performance limits. To output the result, you call glAccum with GL_RETURN and the total weighting.

Motion Blur using the accumulation buffer with 4 draws:
Motion Blurred scene (static objects)


Motion Blurred particles


Additionally, the accumulation buffer can also be used for antialiasing by accumulating renders offset by sub-pixel amounts, or for depth of field using a similar technique to blur regions of the scene. In truth, the accumulation buffer isn’t very efficient, so it is better to implement motion blur using shaders and FBOs.

There are many ways to implement motion blur using shaders. The technique described in GPU Gems 3, and required for the homework assignment, makes use of depth and is a post-processing effect. In the second pass, the depth buffer is passed to the fragment shader, where each fragment’s world-space position is reconstructed. That position is then transformed by the previous frame’s view-projection matrix, giving a velocity vector from the pixel’s current position to its position in the previous frame. The framebuffer is then sampled along this velocity vector to create the motion blur effect.
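A condensed sketch of that pass, following the GPU Gems 3 chapter cited below; the uniform names and the sample count are my own:

#version 330
uniform sampler2D sceneTex;    // colour from the first pass
uniform sampler2D depthTex;    // depth from the first pass
uniform mat4 invViewProj;      // inverse of the current view-projection matrix
uniform mat4 prevViewProj;     // view-projection matrix of the previous frame
in vec2 texCoord;
out vec4 fragColour;

const int NUM_SAMPLES = 8;

void main()
{
    float depth = texture(depthTex, texCoord).r;

    // Rebuild the pixel's world-space position from its depth.
    vec4 clipPos = vec4(texCoord * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 worldPos = invViewProj * clipPos;
    worldPos /= worldPos.w;

    // Find where that position was on screen in the previous frame.
    vec4 prevClip = prevViewProj * worldPos;
    prevClip /= prevClip.w;
    vec2 prevUV = prevClip.xy * 0.5 + 0.5;

    // Screen-space velocity, split across the samples.
    vec2 velocity = (texCoord - prevUV) / float(NUM_SAMPLES);

    vec3 colour = vec3(0.0);
    vec2 uv = texCoord;
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        colour += texture(sceneTex, uv).rgb;
        uv -= velocity;                      // step back along the motion
    }
    fragColour = vec4(colour / float(NUM_SAMPLES), 1.0);
}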

Steps to motion blur using shaders


Nguyen, H. (2008). Chapter 27. Motion Blur as a Post-Processing Effect. GPU Gems 3. Retrieved March 2, 2014, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html

Tong, Tiying. (2013, September). Tutorial 5: Antialiasing and other fun with the Accumulation Buffer. [Lecture Notes]. Retrieved from https://www.cse.msu.edu/~cse872/tutorial5.html