
Sunday, March 30, 2014

Blog 10

In the interest of adding a quick enhancement to the graphics of our game, I looked through some additional effects.

Fog

Fog is the natural phenomenon in which water in the air obscures light: the further away something is, the more the fog obscures the light reaching the viewer. Fog can be useful as a way to hide geometry and create a sense of depth and atmosphere. In old OpenGL, fog can be achieved by simply enabling it and setting various attributes; it is applied at the end of the pipeline, affecting the final pixels. With shaders, more variety and customization is available to create more interesting kinds of fog. To create a fog effect, the distance of a pixel to the camera is taken into account: the further away, the foggier; this use of depth is akin to depth of field. There are many other ways to create fog, such as basing it on height or even using particles so that the fog can be affected by light and shadow.

A simple implementation:
vec3 fogColour = vec3(0.722, 0.961, 0.949);
float z = gl_FragCoord.z / gl_FragCoord.w;    // approximate eye-space distance
float density = 0.001;
float fog = exp2(-density * density * z * z); // exponential-squared falloff
fog = clamp(fog, 0.0, 1.0);                   // 1 = no fog, 0 = fully fogged

fragColour.rgb = mix(fogColour, diffuse, fog);

Light fog on distant ocean

For more advanced fog effects, the location of the light source can be taken into account so that fog is more strongly illuminated the closer the view direction is to the light – blend between the color of the fog and the color of the light using the dot product of the direction to the pixel and the light direction (similar to diffuse lighting), on top of the regular fog calculation.
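A rough sketch of this, extending the snippet above; rayDir and sunDir are assumed to be the normalized direction from the camera to the pixel and the normalized direction toward the light, neither of which is set up in the earlier code:

// blend the fog color toward the light color when looking toward the light
float sunAmount = max(dot(rayDir, sunDir), 0.0);
vec3 litFogColour = mix(fogColour,              // regular fog color
                        vec3(1.0, 0.9, 0.7),    // warm color near the light
                        pow(sunAmount, 8.0));
fragColour.rgb = mix(litFogColour, diffuse, fog);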

God rays, also known as light scattering or crepuscular rays, are the effect of rays of light hitting small surfaces such as particles or fog and scattering; the scattered light creates visible streaks. God rays are a post-processing effect: in the fragment shader, a vector is created in screen space from the light source to the current pixel, and samples are taken along that vector, attenuated by decay and weight factors, to determine the pixel's color.
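A hedged sketch of such a pass, loosely following the GPU Gems 3 chapter cited below; sceneTex (an occluded bright-pass render), uv, fragColour, lightPosSS (the light position in screen space) and the constants are assumptions rather than values from our game:

uniform sampler2D sceneTex;    // occluded bright-pass render of the scene
uniform vec2 lightPosSS;       // light position in screen space [0,1]
const int NUM_SAMPLES = 64;
const float density = 0.9, weight = 0.05, decay = 0.95, exposure = 0.3;

vec2 texCoord = uv;
vec2 deltaTexCoord = (texCoord - lightPosSS) * density / float(NUM_SAMPLES);
vec3 colour = texture(sceneTex, texCoord).rgb;
float illuminationDecay = 1.0;
for (int i = 0; i < NUM_SAMPLES; i++)
{
    texCoord -= deltaTexCoord;        // step toward the light
    colour += texture(sceneTex, texCoord).rgb * illuminationDecay * weight;
    illuminationDecay *= decay;       // samples further from the pixel count less
}
fragColour = vec4(colour * exposure, 1.0);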

To continue from last week’s blog, I’ve managed to implement a basic particle system rendered using the geometry shader. On top of using it for smoke, clouds and ash in game, the important part is that it also serves to create in-game messages such as score updates and player instructions.



Quilez, I. (2010). Better Fog. [Web log post] Retrieved from http://www.iquilezles.org/www/articles/fog/fog.htm

Nguyen, H. (2008). Chapter 13. Volumetric Light Scattering as a Post-Process. GPU Gems 3. Retrieved March 30, 2014, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch13.html

Sunday, March 23, 2014

Blog 9

Geometry Shader


The geometry shader is a programmable shader located after the vertex shader and before the rasterization stage. Where the vertex shader processes vertices and the fragment shader processes fragments/pixels, the geometry shader processes primitives. Depending on what the input is set to, the geometry shader takes in a single point, line or triangle and can perform operations such as removing, adding or even changing primitives. The geometry shader can also output a completely different primitive type from the input.

The geometry shader can be used to create particles by receiving a point and outputting a quad representing the sprite of the particle. With this, all that is needed is the location of each particle.
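A minimal sketch of such a geometry shader, assuming the vertex shader has already transformed each particle position into view space and that the projection matrix and particle size come in as uniforms:

#version 330
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

uniform mat4 projection;    // particle centres are assumed to already be in view space
uniform float particleSize;
out vec2 uv;                // passed on so the fragment shader can sample the sprite

void main()
{
    vec4 centre = gl_in[0].gl_Position;
    vec2 offsets[4] = vec2[](vec2(-1.0, -1.0), vec2(1.0, -1.0), vec2(-1.0, 1.0), vec2(1.0, 1.0));
    vec2 uvs[4]     = vec2[](vec2(0.0, 0.0),   vec2(1.0, 0.0),  vec2(0.0, 1.0),  vec2(1.0, 1.0));
    for (int i = 0; i < 4; i++)
    {
        uv = uvs[i];
        gl_Position = projection * (centre + vec4(offsets[i] * particleSize, 0.0, 0.0));
        EmitVertex();
    }
    EndPrimitive();
}

Since the quad is expanded in view space, it always faces the camera, which is exactly what billboarded smoke or message sprites need.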

Another use for the geometry shader is tessellation – increasing the number of polygons, thereby making the model more detailed. With tessellation, a low-poly model can be loaded and altered into a higher-poly model using just the geometry shader, saving the costly overhead of complex models. With OpenGL 4, this can also be done using the new dedicated shaders – the tessellation control shader and the tessellation evaluation shader.


Development Progress



It’s been a while since I gave an update on our game. Let’s just say things are progressing bit by bit. Loading in the island map and performing collision detection against it to determine the player’s y position was more troublesome than anticipated; from implementing ray–triangle intersection to finding a way to interpolate between triangles without jitter when moving between them. The terrain collision uses the Möller–Trumbore intersection algorithm to cast a ray from the player straight down and check whether the ray hits a triangle of the terrain. If it does, the weighting of the triangle’s vertices needs to be determined to calculate the y value at the hit point. This is done using barycentric interpolation, which essentially uses the areas formed by the triangle and the point of intersection to determine the weights.
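A sketch of the intersection test, written in GLSL-style vector math for consistency with the other snippets (the same code translates directly to C++ with a vector class); the function name and parameters are mine:

// Möller–Trumbore ray-triangle intersection.
// Returns true and fills t/baryUV if the ray hits the triangle.
bool rayTriangle(vec3 origin, vec3 dir, vec3 v0, vec3 v1, vec3 v2,
                 out float t, out vec2 baryUV)
{
    vec3 e1 = v1 - v0;
    vec3 e2 = v2 - v0;
    vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-6) return false;      // ray is parallel to the triangle
    float invDet = 1.0 / det;
    vec3 s = origin - v0;
    float u = dot(s, p) * invDet;
    if (u < 0.0 || u > 1.0) return false;
    vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * invDet;                // distance along the ray to the hit
    baryUV = vec2(u, v);                    // barycentric weights of v1 and v2
    return true;
}

// The hit height is then the barycentric blend of the three vertex heights:
// y = (1.0 - u - v) * v0.y + u * v1.y + v * v2.y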

Terrain Collision
With the new island level, and cel shading applied to it, things are really coming together from a graphics point of view. Post-processing motion blur has also been implemented, but requires some tweaking. Gameplay however is an issue; at the moment there isn’t much to do other than run around to reach the end. There are plans to have a stream of lava creeping behind the player, and another possibility is randomly falling debris from the eruption.

Screen Space Motion Blur
As we’re only about two weeks away from GameCon, there are still many features that should be implemented in game, such as shadows, fluid effects for the ocean, etc. If there is one lesson to take away from the Monday lecture, it is that we need particles. From the screenshot, it is clear that the sky is lacking detail, not to mention that there should be dust clouds and smoke from the volcano erupting. Another lesson from past weeks is to try adding some of the Brutal Legend VFX. What would also be interesting is to have cartoony particle effects similar to Wind Waker.

Wind Waker Particles


Geometry Shader. (n.d). In OpenGL Wiki. Retrieved from https://www.opengl.org/wiki/Geometry_Shader

Hardgrit, R. (2011). Legend of Zelda: Wind Waker. [Screenshot] Retrieved from http://superadventuresingaming.blogspot.ca/2011/08/legend-of-zelda-wind-waker-game-cube.html

Rideout, P. (2010, September 10). Triangle Tessellation with OpenGL 4.0. [Web log post] Retrieved from http://prideout.net/blog/?p=48

Sunday, March 16, 2014

Blog 8

Depth of Field

Depth of field is the effect of blurriness in select regions of a view based on distance. The effect is caused by the way our eyes work. Unlike the pinhole camera model, where a ray of light reaches the retina directly, our eyes require the ray to travel through a lens to reach the retina. Rays from points on the plane of focus converge on the retina and appear sharp; rays from points outside the focal plane are spread out and blurred, forming what is called the circle of confusion.

Like motion blurring, depth of field can be used to direct the viewer’s attention to specific objects in a scene.



Accumulation Buffer

There are many ways to implement depth of field. One simple implementation uses the accumulation buffer. The basics of this technique require rendering the scene multiple times with different camera offsets; pixels in the scene then fall into regions of in focus, close (foreground) and far (background). The important part of these renderings is that the camera keeps the same point of focus, so when the offset renders are blended together in the accumulation buffer, the focal point stays sharp and the outer regions are blurred – in a way, motion blurred. This implementation is said to create the most accurate real-time depth of field, however the number of passes required gives it a sizable performance hit.

Layering

This method is also known as the 2.5-dimensional approach. The scene is rendered with objects separated into their respective regions of focus, using depth to determine this. So the scene is rendered once with only the objects in focus, once with the out-of-focus background objects and once with the out-of-focus foreground objects. These three renderings create 2D images that are composited together from back to front: the blurry background is added, followed by the in-focus objects and lastly the blurry foreground objects. The problem arising from this technique lies with objects that span multiple regions, causing an unnatural transition between regions.

Reverse-Mapping

Another technique does depth of field per pixel. During the initial rendering of the scene, the depth value of each pixel is stored. This depth value is used with pre-calculated distances to identify which plane of focus a pixel belongs to, and based on that region of focus, the size of a filter kernel is determined. The filter kernel is a gather operation where pixels are sampled from within a circular region. Rather than sampling all the pixels in the circle, a Poisson-disk pattern is used, which also reduces artifacts. The larger the filter kernel, the blurrier the output pixel; pixels determined to be in focus use a filter kernel the size of the pixel itself, resulting in only sampling the pixel itself. An additional step to enhance the final output would be to pre-blur the image and blend the blurred image with the original in the final stage.
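A rough sketch of the gather step, assuming the scene and depth textures from the first pass; the tap offsets below are illustrative stand-ins for a proper Poisson-disk pattern, and focusDepth, focusRange and maxRadius are assumed uniforms:

#version 330
in vec2 uv;
out vec4 fragColour;

uniform sampler2D sceneTex;
uniform sampler2D depthTex;
uniform float focusDepth;   // depth of the focal plane
uniform float focusRange;   // how quickly blur grows away from it
uniform float maxRadius;    // largest circle of confusion, in texture coords

const int NUM_TAPS = 8;
const vec2 taps[NUM_TAPS] = vec2[](
    vec2( 0.53,  0.27), vec2(-0.81,  0.19), vec2( 0.12, -0.90), vec2(-0.31, -0.42),
    vec2( 0.77, -0.55), vec2(-0.14,  0.68), vec2( 0.34,  0.93), vec2(-0.62, -0.78));

void main()
{
    float depth = texture(depthTex, uv).r;
    // in-focus pixels get a radius near zero, out-of-focus pixels up to maxRadius
    float radius = clamp(abs(depth - focusDepth) / focusRange, 0.0, 1.0) * maxRadius;

    vec3 colour = texture(sceneTex, uv).rgb;
    for (int i = 0; i < NUM_TAPS; i++)
        colour += texture(sceneTex, uv + taps[i] * radius).rgb;
    fragColour = vec4(colour / float(NUM_TAPS + 1), 1.0);
}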


Akenine-Möller, T., Haines, E., & Hoffman, N. (2008). Real-time rendering (3rd ed.). AK Peters. Retrieved from http://common.books24x7.com.uproxy.library.dc-uoit.ca/toc.aspx?bookid=31068.

Demers, J. (2004). Chapter 23. Depth of Field: A Survey of Techniques. GPU Gems. Retrieved March 16, 2014, from http://http.developer.nvidia.com/GPUGems/gpugems_ch23.html

Scheuermann, T. (2004). Advanced Depth of Field [PowerPoint slides]. Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/

Sunday, March 9, 2014

Blog 7

Deferred Shading

Up until now we have been performing lighting calculations in our games in the vertex or fragment shaders during the initial pass, and then applying post-processing effects; this is called forward shading. In forward shading, the material and light properties are used immediately in the fragment shader to perform lighting calculations and output a single render target.

Forward Shading


The other technique is deferred shading. As its name implies, it postpones the lighting to a later stage. From the fragment shader, the attributes used for lighting are written to multiple render targets (MRT) holding color, normal and depth. These MRT make up what is known as the G-buffer, or geometry buffer. Lighting is then calculated as a post-processing effect by sampling the render targets as textures.
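A minimal sketch of the first pass: the fragment shader does no lighting and simply writes the attributes out to the MRT (texture and variable names are assumptions):

#version 330
in vec3 viewNormal;
in vec2 uv;

uniform sampler2D diffuseTex;

layout(location = 0) out vec4 gColour;   // albedo
layout(location = 1) out vec4 gNormal;   // view-space normal, packed into [0,1]
// depth comes for free from the depth buffer, or can be written to a third target

void main()
{
    gColour = texture(diffuseTex, uv);
    gNormal = vec4(normalize(viewNormal) * 0.5 + 0.5, 1.0);
}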

Deferred Shading


In forward shading, each fragment is shaded as it goes and requires additional calculations for each light; the number of calculations needed is roughly the number of fragments * the number of lights. As fragments can include those that lie behind others, they can be overlapped. This creates inefficiencies when a fragment is not affected by a light, or is overdrawn later when the final pixel is calculated; essentially all the effort spent on that fragment is wasted.

What is important to note in deferred shading is that irrelevant fragments have been culled before the lighting stage reads the G-buffer. The number of calculations needed is significantly less: the number of pixels * the number of lights. As you can see, the complexity of deferred lighting is independent of the number of objects in a scene, whereas in forward shading, the more objects there are, the more fragments and the slower the performance.

Additionally, with deferred shading, lighting can be optimized by determining a light's region of influence using the available world-space information. Treating the region of a point light as a sphere, a spot light as a cone and a directional light as a box, we can determine which pixels are affected and perform shading only on the pixels inside the region of influence of a specific light. With this technique, many more lights can be rendered compared to forward shading. This is one of the prime advantages of deferred shading: creating scenes with many lights.
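A sketch of the lighting pass for a single point light, sampling the G-buffer textures above; the inverse projection matrix is an assumption used to rebuild view-space position from depth:

#version 330
in vec2 uv;
out vec4 fragColour;

uniform sampler2D gColour;
uniform sampler2D gNormal;
uniform sampler2D gDepth;
uniform mat4 invProjection;
uniform vec3 lightPosView;   // light position in view space
uniform vec3 lightColour;
uniform float lightRadius;   // region of influence

void main()
{
    // rebuild the view-space position of this pixel from the stored depth
    float depth = texture(gDepth, uv).r;
    vec4 clip = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 viewPos = invProjection * clip;
    vec3 pos = viewPos.xyz / viewPos.w;

    vec3 n = normalize(texture(gNormal, uv).rgb * 2.0 - 1.0);
    vec3 toLight = lightPosView - pos;
    float dist = length(toLight);
    if (dist > lightRadius) discard;    // outside the light's region of influence

    float atten = 1.0 - dist / lightRadius;
    float diff = max(dot(n, toLight / dist), 0.0);
    vec3 albedo = texture(gColour, uv).rgb;
    fragColour = vec4(albedo * lightColour * diff * atten, 1.0);
}

In practice each light would be drawn as its bounding volume (or a screen-space quad covering it), so only the covered pixels run this shader; the results are blended additively.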

A disadvantage of deferred shading is the memory cost of storing the G-buffer. The overhead of storing MRT can be a problem on some hardware, but as technology improves this is becoming less and less of a problem. Antialiasing and transparency are also problematic; they require a different approach. The techniques employed to handle antialiasing and transparency with deferred shading are inefficient, and as such may actually lower performance versus the forward shading approach.

Advantages of deferred shading
  • Faster, more efficient lighting calculations
  • More lights can be added
  • Lighting information can be accessed by other shaders
  • No additional geometry passes needed

Disadvantages of deferred shading
  • Memory cost
  • Antialiasing difficulty
  • Transparency done separately
  • Can’t be done with old hardware

Example of MRT used for deferred shading

Akenine-Möller, T., Haines, E., & Hoffman, N. (2008). Real-time rendering (3rd ed.). AK Peters. Retrieved from http://common.books24x7.com.uproxy.library.dc-uoit.ca/toc.aspx?bookid=31068.

Gunnarsson, E., Olausson, M. (2011). Deferred Shading[Seminar Notes]. Retrieved from http://www.cse.chalmers.se/edu/year/2011/course/TDA361/Advanced%20Computer%20Graphics/DeferredRenderingPresentation.pdf

Hogue, A. (2014, March 7). Deferred Shading [PowerPoint slides]. Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/

Nguyen, H. (2008). Chapter 19. Deferred Shading in Tabula Rasa. GPU Gems 3. Retrieved March 9, 2014, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch19.html

Valient, M. (2007). Deferred Rendering in Killzone 2. Guerrilla Games. Retrieved from http://www.guerrilla-games.com/presentations/Develop07_Valient_DeferredRenderingInKillzone2.pdf



Sunday, March 2, 2014

Blog 6

This week was the beginning of midterms. Because of that, we only briefly covered image processing using different color models such as HSL/HSV, and other techniques for shadow mapping. The topic of this post will be:

Motion Blur


We have all seen the effect motion has on our view of the world. Due to the discrepancy between how fast things move and how much detail our eyes can register, we see streaking, a blurring of details. We are unable to perceive every moment of time/frame when the movement of ourselves or the world is too fast; the result is motion blur. Although motion blur can be problematic for capturing images and video, it can also be employed to create a sense of realistic motion, as well as directing focus to the object in motion. In games, motion blur has the added benefit of padding a low-FPS scene to help hide jerkiness. Too much motion blur, however, can cause negative effects such as dizziness.

A fast way to implement simple motion blur in OpenGL is using the accumulation buffer. This buffer is similar to the back/front buffer in the sense that it can store images.  The accumulation buffer can store multiple images and blend them together using weights.

glClear(GL_ACCUM_BUFFER_BIT);   // clear the accumulation buffer each frame
//draw scene here
glAccum(GL_ACCUM, 0.5);         // add this render with a weight of 0.5

//change scene (e.g. offset by velocity)
//draw scene here
glAccum(GL_ACCUM, 0.5);         // add the second render, also weighted 0.5

glAccum(GL_RETURN, 1.0);        // write the weighted sum back to the framebuffer

In the above, the accumulation buffer is first cleared each frame, then the scene is rendered and added to the accumulation buffer with a weighting of 0.5. Afterwards the scene is altered (perhaps offset by velocity) and added with the same weighting. There isn't a set limit to how many renders you can accumulate other than hardware and performance limits. To output the scene, you simply call GL_RETURN and assign the total weighting.

Motion Blur using the accumulation buffer with 4 draws:
Motion Blurred scene (static objects)


Motion Blurred particles


Additionally, the accumulation buffer can also be used for antialiasing, by offsetting the scene by marginal values, or for depth of field, using similar techniques to blur regions of the scene. In truth, the accumulation buffer isn't very efficient, so it is better to implement motion blur using shaders and FBOs.

There are many ways to implement motion blur using shaders; the technique listed in GPU Gems 3, required for the homework assignment, makes use of depth. It is a post-processing effect. In the second pass, the values of the depth buffer are passed to the fragment shader, where each fragment's world-space coordinate is computed. These values are then used with the previous frame's view-projection matrix to create a velocity vector from the current frame to the previous frame. It is with these velocity vectors as offsets that pixels of the framebuffer are sampled to create the motion blur effect.
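A sketch of that pass, loosely following the GPU Gems 3 chapter cited below; the uniform names and sample count are assumptions:

#version 330
in vec2 uv;
out vec4 fragColour;

uniform sampler2D sceneTex;
uniform sampler2D depthTex;
uniform mat4 invViewProj;    // current frame, clip space back to world space
uniform mat4 prevViewProj;   // previous frame, world space to clip space
const int NUM_SAMPLES = 8;

void main()
{
    // world-space position of this pixel, rebuilt from the depth buffer
    float depth = texture(depthTex, uv).r;
    vec4 clip = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 world = invViewProj * clip;
    world /= world.w;

    // where that point was last frame, giving a screen-space velocity
    vec4 prevClip = prevViewProj * world;
    vec2 prevUV = (prevClip.xy / prevClip.w) * 0.5 + 0.5;
    vec2 velocity = (uv - prevUV) / float(NUM_SAMPLES);

    // sample the framebuffer along the velocity vector and average
    vec3 colour = texture(sceneTex, uv).rgb;
    vec2 sampleUV = uv;
    for (int i = 1; i < NUM_SAMPLES; i++)
    {
        sampleUV += velocity;
        colour += texture(sceneTex, sampleUV).rgb;
    }
    fragColour = vec4(colour / float(NUM_SAMPLES), 1.0);
}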

Steps to motion blur using shaders


Nguyen, H. (2008). Chapter 27. Motion Blur as a Post-Processing Effect. GPU Gems 3. Retrieved March 2, 2014, from http://http.developer.nvidia.com/GPUGems3/gpugems3_ch27.html

Tong, Tiying. (2013, September). Tutorial 5: Antialiasing and other fun with the Accumulation Buffer. [Lecture Notes]. Retrieved from https://www.cse.msu.edu/~cse872/tutorial5.html

Sunday, February 23, 2014

Blog 5

With study week at an end, the midterm is just a day away. I will dedicate this post to a light review of topics I have not touched on in these blog posts.

It is important to distinguish the differences between modern and old OpenGL, and to understand how the GPU accesses data through buffers and processes it in parallel, with thousands of cores versus the four or so cores of the CPU. With shaders, the once fixed pipeline can now be accessed and programmed specifically to our needs; no longer do we have to rely on slow CPU-based methods. To fully understand this, we need to understand the graphics pipeline.

Graphics Pipeline

Vertex data is passed through vertex buffer objects that are sent to the GPU. Setting up these VBOs allows us to access them in the vertex shader as input by specifying layout (location = attribute index) in vec3 pos. Other data can be passed via uniform variables. In the vertex shader, a position is passed along by assigning a value to gl_Position. The next stage, after each vertex is processed, is triangle assembly. In triangle assembly, the vertex positions are used to create triangles by connecting groups of three vertices; together these form the full object passed into the VBO. In the next stage, the rasterizer goes through each triangle and determines whether to clip portions of it depending on whether it is visible in screen space. The remaining unclipped portions of the triangle are converted to fragments. If other data is passed along with the position (normal, color, uv), the rasterizer will also interpolate between the vertices of the triangle and assign each fragment interpolated values of color, normal or uv coords. The last step is to process each fragment in the fragment shader. Here we are passed values from the vertex shader and the fragment itself. In this shader we typically manipulate the color of the fragment to create different effects such as lighting, and can sample values from a texture and map them using tex coords. When all is done, the data can be output in the same way, as layout (location = attribute index) out vec4 color. Data is sent to an FBO where it can be drawn on screen or stored for later use (such as a second pass for post-processing effects).
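To tie the stages together, here is a minimal vertex/fragment shader pair along the lines described above; the texture, matrix and attribute names are placeholders:

// --- vertex shader ---
#version 330
layout(location = 0) in vec3 pos;
layout(location = 1) in vec2 texCoord;
uniform mat4 mvp;
out vec2 uv;
void main()
{
    uv = texCoord;                        // interpolated by the rasterizer
    gl_Position = mvp * vec4(pos, 1.0);   // clip-space position for triangle assembly
}

// --- fragment shader ---
#version 330
in vec2 uv;
uniform sampler2D diffuseTex;
layout(location = 0) out vec4 colour;     // written to the bound FBO attachment
void main()
{
    colour = texture(diffuseTex, uv);
}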

Shadow Mapping

Although I made mention of how shadow mapping works in previous posts, I never went through it in enough detail or mentioned the math behind it. After the shadow map is created in the first pass from the light's perspective, the second pass is from the camera's perspective. From here, we need to map pixels from camera space into the shadow map's space to compare whether a given pixel is occluded. What we have is the world-to-light matrix (WorldToLight) and the world-to-camera matrix/modelview (WorldToCamera); what we need is to transform a pixel from camera space to light space. Simply applying WorldToLight * WorldToCamera * v_camera does not work, since the spaces do not line up: WorldToCamera expects a world-space point. What we need instead is WorldToLight * CameraToWorld * v_camera, and to get CameraToWorld we take the inverse of the modelview matrix, WorldToCamera^-1.
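A sketch of that transform in the second pass's fragment shader, with the matrices supplied as uniforms (all names are mine); the remap into [0,1] texture space and the depth compare against the shadow map are included to complete the picture:

uniform mat4 lightProjection;   // the light's matrices from the first pass
uniform mat4 lightView;
uniform mat4 inverseView;       // inverse of the camera's modelview matrix
uniform sampler2D shadowMap;

float shadowFactor(vec3 viewSpacePos)
{
    vec4 lightClip = lightProjection * lightView * inverseView * vec4(viewSpacePos, 1.0);
    vec3 coord = lightClip.xyz / lightClip.w * 0.5 + 0.5;   // into [0,1] shadow map space
    float closest = texture(shadowMap, coord.xy).r;         // depth stored in the first pass
    float bias = 0.005;                                     // small offset to avoid shadow acne
    return (coord.z - bias > closest) ? 0.0 : 1.0;          // 0 = in shadow, 1 = lit
}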

Sunday, February 16, 2014

Blog 4

In our lecture on Monday, we had a look at the visual techniques employed in the game Brutal Legend. As the GDC talk was quite lengthy, I will mention what I thought was interesting and could be added to our game.

Brutal Legend has an art style aimed at looking like the cover art of the heavy metal albums of old. This art essentially looks apocalyptic, with stormy environments and very dramatic scenery. To create these environments, Brutal Legend uses a lot of particle effects. To light these particle effects appropriately, they use "center-to-vert" normals: every vertex normal on a particle's triangles points directly away from the particle's center. This creates the illusion that a particle (a 2D textured plane) has volume, appearing to have a spherical depth. Using this technique, particles such as smoke, which would otherwise have a flat billboard-like appearance, look realistic and three-dimensional.



The sky in Brutal Legend is very dynamic, constantly shifting, changing and blending with factors such as weather, time of day and location. The conventional method of using sky domes/skyboxes was not used here; instead, giant particles emitted far into the distance were used as a substitute. Layers of these sky particles were used to create the appearance of a realistic sky (a layer each for stars, moon, lightning and types of clouds).



In order to match the dramatic sky of the game, Brutal Legend took to creating equally dramatic environments. It is interesting to see how a dull-looking environment is enhanced step by step by adding in the elements of sky, fog, shadows, lights and post-processing effects.



Development Progress 

This was quite a hectic week, with a filmmaking and an animation assignment due. I toiled away into the night working on the pre-visualization (prototype cutscene) that may ultimately be finalized and integrated in-game. I also took the effort of using textures and actual models rather than just blocked-in primitives; as a result, I created a scene very close to what we want our finished game to look like.





Other than that, there has been little progress as far as our game goes. Having to upgrade our first level from the dull maze, I needed to implement terrain. As you can see in the video, we have a map all ready for importing into our game; the issue is of course how to do collision with it. Using a height map would work, though there is a cave in the map. We could also try to use ray-casting from the character to the terrain. Another option would be to use shaders…

This reading week, I plan to drudge deep into development and not idle; to work on integrating the new map, and doing a few homework questions in the process.



Demoreuille, P., Skillman, D. (2010). Rock Show VFX - The Effects That Bring Brutal Legend to Life. Double Fine Productions. Retrieved from http://gdcvault.com/play/1012551/Rock-Show-VFX-The-Effects

Saturday, February 8, 2014

Blog 3 

This week in INFR2350, we covered global illumination and shadow mapping in our lectures, and color manipulation in tutorials.

Global illumination is a more advanced lighting technique that takes into account reflected rays. It tries to simulate real-world lighting, where rays of light bounce off surfaces indefinitely. This of course comes at the cost of speed. While there are many techniques, such as radiosity and ray tracing, screen space ambient occlusion, or SSAO, is the most suitable for games (at least until hardware improves further). SSAO is implemented in the pixel shader, using the depth information and the rendered texture of the screen from the first pass. Using the surrounding pixels of a given point, occlusion is computed; essentially, soft shadows are added to up the realism of the game.
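A rough sketch of the idea, using a stored view-space position texture rather than reconstructing positions from the raw depth buffer (one common variant); the kernel, radius and bias values are assumptions:

#version 330
in vec2 uv;
out vec4 fragColour;

uniform sampler2D positionTex;   // view-space positions from the first pass
uniform sampler2D normalTex;     // view-space normals, packed into [0,1]
uniform mat4 projection;
uniform vec3 kernel[16];         // random offsets roughly inside a unit sphere
uniform float radius;

void main()
{
    vec3 pos = texture(positionTex, uv).xyz;
    vec3 n = normalize(texture(normalTex, uv).xyz * 2.0 - 1.0);

    float occlusion = 0.0;
    for (int i = 0; i < 16; i++)
    {
        // flip samples that point into the surface so they stay in the upper hemisphere
        vec3 dir = (dot(kernel[i], n) < 0.0) ? -kernel[i] : kernel[i];
        vec3 samplePos = pos + dir * radius;

        // project the sample back to screen space and look up the surface stored there
        vec4 clip = projection * vec4(samplePos, 1.0);
        vec2 sampleUV = clip.xy / clip.w * 0.5 + 0.5;
        float sceneZ = texture(positionTex, sampleUV).z;

        // if the stored surface sits in front of our sample, the sample is occluded
        if (sceneZ > samplePos.z + 0.02)
            occlusion += 1.0;
    }
    float ao = 1.0 - occlusion / 16.0;
    fragColour = vec4(vec3(ao), 1.0);   // ambient light is darkened by this factor
}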

Notice how the objects on the table are almost popping out of the image, almost like they are not actually on the table. 

In this second image, with ambient occlusion, the objects blend in with the shadows and feel consistent as part of the scene.

The other topic, which also lends itself to creating more realistic environments, is shadow mapping. As the name implies, shadow mapping creates the appearance of shadows in game. As with global illumination, there are many techniques to create them. The basic concept of shadow mapping requires two passes, like SSAO. First, the scene is partially rendered from the perspective of the light source (per light); only objects casting a shadow are rendered, and the only thing needed from this pass is the depth. The data in the depth buffer is saved and used as the shadow map for the second pass. In the second pass, the scene is rendered from the perspective of the camera. Each pixel is then compared against the depth map from the first pass to determine whether it is in shadow or out of shadow.

A flaw with this technique is the visual artifact of jagged shadows, created due to the limitations of pixels mapping to texels. A simple solution is to blur or anti-alias the shadow.

An example can be seen here:


Development Progress


Forgive the short length of this update, as my excellent time management skills have led to a drought of time; I am facing the hectic challenge of completing two lengthy assignments.

Speaking of which, in Animation and Production we are to prototype a cutscene rendered in Maya and edited with Adobe Premiere. This brings up the question of how to implement this cutscene in our game. Browsing through the annals of the internet, the methods differ between having the scene scripted in game and played in real time (quite challenging and redundant), or playing back a pre-rendered video. I am opting for the latter, though we would need to delve deep into Maya. This leaves the options of integrating video playback with OpenGL/SFML, or using an external library. Another, perhaps more interesting, option would be to play back the cutscene as a sequence of images on a texture.

Development of our game is advancing along bit by bit. The conversion to modern OpenGL is progressing; the primary concerns of VBOs and shaders are integrated and ready for use. Transitioning away from old OpenGL perspective and transformation functions is another matter however… It requires overhauling our previous camera system and math libraries.



Hogue, A. (2014, February 3). Global Illumination [PowerPoint slides].
Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/

SGHi. (2011). Skyrim - NVIDIA Ambient Occlusion [Screenshot]. Retrieved from http://www.nexusmods.com/skyrim/mods/31/?

Shastry, A, S. (2005). Soft edge shadow comparison [Screenshot]. Retrieved from http://www.gamedev.net/page/resources/_/technical/graphics-programming-and-theory/soft-edged-shadows-r2193




Saturday, February 1, 2014

Blog 2

Before we proceed further, an introduction to our game. It is called Monkey Racers; a racing game situated on a remote volcanic island with a cartoony and vibrant vibe to it. Here is a screen from the game.


As the first month of this freezing semester comes to an end, the workload has gradually increased. What awaits us are two hectic months of development. At the very least the winter Olympics are approaching; too bad I don’t have cable.

This week in INFR2350, we covered post processing effects.

Post-processing effects alter the final image. From a graphics point of view, this means after all the computations of rendering, lighting, shadows and the like, the output 2D image is what would be sent to the screen each frame. Post-processing effects are applied to this image to create some very interesting visual appearances; these effects are akin to Photoshop's image manipulation.

Applying these effects requires mathematical operations to be performed on the pixels of the image. Pixels are multiplied with a matrix to create different effects. This matrix takes into consideration the values of adjacent and surrounding pixels to output an altered value for the source pixel, and is called the convolution matrix, kernel/mask or filter. It is important to note that the values in the matrix should be normalized to total a value of 1.
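A sketch of how such a kernel would be applied in a post-processing fragment shader; the kernel values and texel size come in as uniforms (names are mine), with the kernel assumed to be already normalized:

#version 330
in vec2 uv;
out vec4 fragColour;

uniform sampler2D sceneTex;
uniform float kernel[9];    // the 3x3 convolution matrix, row by row
uniform vec2 texelSize;     // 1.0 / screen resolution

void main()
{
    vec3 sum = vec3(0.0);
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            vec2 offset = vec2(float(x), float(y)) * texelSize;
            sum += texture(sceneTex, uv + offset).rgb * kernel[(y + 1) * 3 + (x + 1)];
        }
    }
    fragColour = vec4(sum, 1.0);
}

Filling the kernel with nine values of 1/9 gives the box blur discussed below, while binomial weights give the smoother Gaussian result.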

GIMP, the free image editor (discount Photoshop), allows you to specify your own convolution matrix to apply to an image. This makes it a nifty tool to easily see how different filters would look when applied in OpenGL. Using the screenshot from our game, we take to applying these post-processing effects in GIMP.

The simplest of filters, which simply blurs the image with a matrix of 1s, is the box filter. A smoother blur that produces fewer artifacts is the Gaussian blur, whose weights follow Pascal's triangle (binomial) distribution.


Box Blur
Gaussian Blur






When applied to our screenshot, it produces the following image.

Blurred


As it is not prominently visible, if we zoom up close we will notice a greater difference.
Normal, Box Blur, and Gaussian Blur

Another interesting post-processing effect is edge detection, where edges in the image are essentially highlighted so that objects appear to have a sort of glow to them. This can commonly be seen in games to guide players to objects of interest. To achieve this effect, Sobel filtering can be used. For optimization purposes, rather than a single pass, two passes of the filter are used (the same can be said of Gaussian blur): one pass for each axis (x, y), which are then merged to create the final image.
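A sketch of the Sobel operator, written as a single pass for brevity; the optimized version described above would run the X and Y kernels in separate passes and merge the results:

#version 330
in vec2 uv;
out vec4 fragColour;

uniform sampler2D sceneTex;
uniform vec2 texelSize;

void main()
{
    const float sobelX[9] = float[](-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, -1.0, 0.0, 1.0);
    const float sobelY[9] = float[](-1.0, -2.0, -1.0, 0.0, 0.0, 0.0, 1.0, 2.0, 1.0);

    float gx = 0.0;
    float gy = 0.0;
    for (int y = -1; y <= 1; y++)
    {
        for (int x = -1; x <= 1; x++)
        {
            // convert each neighbour to luminance before weighting it
            float lum = dot(texture(sceneTex, uv + vec2(float(x), float(y)) * texelSize).rgb,
                            vec3(0.299, 0.587, 0.114));
            gx += lum * sobelX[(y + 1) * 3 + (x + 1)];
            gy += lum * sobelY[(y + 1) * 3 + (x + 1)];
        }
    }
    fragColour = vec4(vec3(length(vec2(gx, gy))), 1.0);   // edge strength
}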

Sobel filter for X and Y

The image produced from applying the two filters; notice how the outlines of the fences and the monkey are more noticeable.



One of the major effects seen of late in AAA games, and commonly a challenge for weaker graphics cards, is HDR bloom. HDR stands for high dynamic range, which signifies the range of brightness values available. Under normal imaging standards there are only 256 levels per color channel; what HDR does is simulate a much larger range. Modern cameras do this by taking pictures at different exposure levels and combining them into one image. The combined image is vibrant with a wide palette of colors, often seen as glowing and bright.

The HDR bloom effect is created in graphics by first creating a bright version of the scene, applying a blur filter to it, then combining this image with the original.
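A sketch of the bright pass and the final combine (the blur in between would use the Gaussian kernel from earlier in this post); the texture names, threshold and output variables are assumptions:

// --- bright pass (first post-processing step) ---
vec3 colour = texture(sceneTex, uv).rgb;
float brightness = dot(colour, vec3(0.299, 0.587, 0.114));   // luminance
brightPass = vec4(brightness > threshold ? colour : vec3(0.0), 1.0);

// ... several Gaussian blur passes are then applied to the bright-pass texture ...

// --- final combine ---
vec3 scene = texture(sceneTex, uv).rgb;
vec3 bloom = texture(blurredHighlights, uv).rgb;
fragColour = vec4(scene + bloom, 1.0);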

Highlights

Highlights + several passes of gaussian blur

Final HDR scene with (highlights + blur) + original image

With enough previews of the shader effects to add to the game, the challenge is in now implementing them...


Hogue, A. (2014, January 31). Fullscreen effects PostProcessing [PowerPoint slides].
Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/


Saturday, January 25, 2014

Blog 1

After much delay, here is my first blog of INFR2350 – Intermediate Graphics.

This entry will cover an overview of material of the past three weeks; namely VBOs, shaders and lighting.

VBO

In modern OpenGL, with shaders, we gain the ability to program directly on the GPU how our data is to be computed. Before touching on shaders, we need to know the difference in how to pass our data in the first place. Unlike old OpenGL, where we iterate through calls of glVertex, glNormal and glTexCoord, we only need to pass the data once and simply call draw when the object needs to be rendered. To achieve this, there are many ways to pass the data to the GPU as a VBO; the gist of it is to either pass separate arrays of vertices, normals and texCoords, or to organize the data in a single interleaved array and pass that instead.

Shaders

Shaders are essentially small programs (like functions) run on the GPU. They are fed a number of inputs, perform calculations and then pass the output further along the pipeline. What is important to note is that small individual pieces are processed one by one, not the entire object itself; for instance the position, normal and texCoord of a single vertex. The changes to a vertex are never returned to the CPU, as this would be both costly and unnecessary. As the vertices are processed individually (in parallel on the GPU), previous results cannot be relied on.

Lighting

Though we covered lighting models in intro to graphics, a brief review is needed to refresh dormant knowledge.

OpenGL uses the Blinn-Phong lighting model to simulate realistic lighting. Under the Phong model, three color components are calculated and then added together to determine the final color: ambient, diffuse and specular.

Ambient is a global ambience where all light is of the same color; it can be thought of as indirect light with no source.
ambient = ka * L
where ka is a coefficient for the intensity
L is the color of the light

Diffuse is lighting where color is scattered by the material in accordance with the normal of the object.

diffuse = kd * (l . n) * L

where kd is a coefficient (representing material properties such as absorbed, non-reflected light)
l . n is the dot product between the incident light direction and the normal,
also the cosine of the angle between the normal and l
L is the color of the light

Specular is the shiny reflection of objects, reflecting light toward the viewer.

specular = ks * (cos φ)^a * L = ks * (r . v)^a * L

where ks is a coefficient
a is a shininess exponent and φ is the angle between the viewer and the reflected ray
cos φ is also the dot product of the reflected ray and the viewer, r . v
L is the color of the light

In the Blinn-Phong model that OpenGL uses, a halfway vector is computed for optimization purposes. Instead of needing to determine the reflected ray, we use the halfway vector of l and v and substitute n . h in place of r . v.
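A compact sketch of the whole model in GLSL, assuming the light, normal and view vectors are already normalized and the coefficients come in as uniforms or constants:

vec3 blinnPhong(vec3 n, vec3 l, vec3 v,
                vec3 ka, vec3 kd, vec3 ks, float shininess, vec3 lightColour)
{
    vec3 h = normalize(l + v);                          // halfway vector
    float diff = max(dot(l, n), 0.0);                   // l . n
    float spec = pow(max(dot(n, h), 0.0), shininess);   // (n . h)^a in place of (r . v)^a
    return (ka + kd * diff + ks * spec) * lightColour;
}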


Hogue, A. (2014, January 13). Intro to Shaders [PowerPoint slides].
Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/

Pazzi, W. R. (2013, December 2). Intro to Computer Graphics Review & Questions [PowerPoint slides]. Retrieved from UOIT College Blackboard site: https://uoit.blackboard.com/