My Master's thesis was about improving the visual quality of crowd characters at the Moving Picture Company in London, with the main focus on Pose Space Deformation (PSD). Several different PSD techniques were implemented and compared, as well as a localized technique of my own. To remove visual artifacts around joints, such as the collapsing elbow and the candy-wrapper effect, a method called Stretchable and Twistable Bone Skinning was also implemented. Furthermore, the thesis presents research on Dynamic Pose Space Deformation based on joint accelerations: two methods were implemented to handle dynamic effects on crowd characters, such as fat bouncing during running and jumping.
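As an illustration of the core PSD idea (a minimal sketch, not the thesis implementation), corrective offsets sculpted at example poses can be interpolated over pose space with radial basis functions; the Gaussian kernel and its width below are assumptions:

```python
import numpy as np

def rbf(d2, sigma=0.5):
    """Gaussian radial basis function over squared pose-space distance."""
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_psd(example_poses, example_offsets, sigma=0.5):
    """Solve for RBF weights so each example pose exactly reproduces
    its sculpted corrective offset (Lewis-style PSD fitting)."""
    P = np.asarray(example_poses, float)                  # (n, d) pose vectors
    d2 = np.sum((P[:, None] - P[None, :]) ** 2, axis=2)   # pairwise distances
    Phi = rbf(d2, sigma)                                  # (n, n) basis matrix
    return np.linalg.solve(Phi, np.asarray(example_offsets, float))

def eval_psd(pose, example_poses, weights, sigma=0.5):
    """Interpolate a corrective offset at an arbitrary runtime pose."""
    P = np.asarray(example_poses, float)
    d2 = np.sum((P - np.asarray(pose, float)) ** 2, axis=1)
    return rbf(d2, sigma) @ weights
```

At an example pose the fitted system reproduces the sculpted offset exactly; between examples the Gaussians blend the corrections smoothly.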
Hair in computer graphics can be simulated and rendered with very high quality, but at the cost of long rendering times. The artist has to wait for a render to see whether the hair looks good, which is time consuming. A better approach is to show how the hair will look directly in the viewport. This report presents GPU implementations of the Marschner and Kajiya-Kay shading models in GLSL. A hair generation and interpolation method that uses centroidal Voronoi tessellation to find optimal guide-curve positions was also implemented on the GPU using CUDA. All of these were combined into a Maya plug-in that renders the generated hair in real time in the viewport.
We implemented a PIC/FLIP fluid simulation, a hybrid method that combines the strengths of the Lagrangian and Eulerian approaches to fluid simulation. The solver runs on the CPU and is multithreaded with OpenMP for speed. Improved blobbies were implemented to create a signed distance field, from which a mesh of the fluid is extracted with the marching cubes algorithm.
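The essential PIC/FLIP trick can be sketched minimally as below (the blend factor and the dense particle-to-grid weight matrix are illustrative assumptions, not the project's actual grid transfer):

```python
import numpy as np

def pic_flip_update(part_v, grid_old, grid_new, part_w, alpha=0.05):
    """Blend the PIC and FLIP particle-velocity updates.
    part_v:   (n,) particle velocities before the grid solve
    grid_old: (m,) grid velocities after particle-to-grid transfer
    grid_new: (m,) grid velocities after pressure projection etc.
    part_w:   (n, m) interpolation weights (each row sums to 1)
    alpha:    fraction of pure PIC (damped) vs FLIP (lively but noisy)."""
    pic = part_w @ grid_new                          # resample new velocity
    flip = part_v + part_w @ (grid_new - grid_old)   # add the grid *change*
    return alpha * pic + (1.0 - alpha) * flip
```

With alpha = 0 the update is pure FLIP (particles keep their detail), with alpha = 1 pure PIC (heavily damped); a small alpha is the usual compromise.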
Computer-generated objects can be physically animated with collisions, but without sound. In visual effects, the sound that sells the collision has to be added afterwards, usually by recording a similar sequence, or a completely different one that sounds about right for the computer-generated simulation. However, by using the finite element method, vibrations and pressure differences of the deformed mesh can be extracted and used to calculate how the object would sound when, for example, hitting the floor. Different materials can be simulated accurately by changing the Young's modulus and density to those of the desired material.
The method triangulates point light sources using two light probes and one matte sphere for estimating the world positions of the three spheres. It worked well for light sources at distances up to 1.5 meters, with an error of 10 percent of the distance. The error grows, as expected, with distance, so at 2 meters it is about 35-40 percent of the distance; with some more time, and by estimating area light sources instead, the error could probably be reduced.
This lab gave a good understanding of low-level camera calibration and image processing on raw image data, such as camera-noise reduction with black-level subtraction, debayering, and autofocusing. The autofocus algorithm was implemented by looking at the thresholded response of a Laplacian filter within a localized window.
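A minimal sketch of such a focus measure (the kernel and threshold value are assumptions; the lab's exact parameters differ). An autofocus loop would move the lens to the position that maximizes this value, since a sharply focused image has the strongest high-frequency edge response:

```python
import numpy as np

# Standard 3x3 discrete Laplacian kernel
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], float)

def focus_measure(img, threshold=10.0):
    """Sum of Laplacian responses above a threshold inside the window."""
    img = np.asarray(img, float)
    h, w = img.shape
    resp = np.zeros((h - 2, w - 2))
    for dy in range(3):                      # valid (un-padded) convolution
        for dx in range(3):
            resp += LAPLACIAN[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    resp = np.abs(resp)
    return resp[resp > threshold].sum()      # ignore low-level noise
```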
An edge-directed interpolation method for demosaicing was implemented and compared to nearest-neighbor, bilinear, and bicubic interpolation. The lab also covered multispectral imaging, where images were simulated as lit under different illuminants and a camera was used as a spectrophotometer.
The goal of this lab was to implement and understand the HDR algorithm. The camera response curve was created using HDRShop, and the images were calibrated using it. A weight function was implemented, and an HDR image was assembled as the weighted mean of irradiance values from the different exposures. The HDR image was then tone mapped so that it could be displayed on screen.
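The assembly step can be sketched Debevec-style as follows (the hat-shaped weight function and the log-response lookup table are illustrative assumptions, not the exact lab code):

```python
import numpy as np

def hat_weight(z, z_min=0.0, z_max=255.0):
    """Hat weighting: trust mid-range pixels, distrust dark/saturated ones."""
    mid = 0.5 * (z_min + z_max)
    return np.where(z <= mid, z - z_min, z_max - z)

def assemble_hdr(images, exposure_times, response):
    """Weighted mean of per-exposure log-irradiance estimates.
    images:    list of uint8 arrays (one per exposure)
    response:  length-256 lookup g(z) mapping pixel value to log exposure."""
    num = np.zeros(images[0].shape, float)
    den = np.zeros(images[0].shape, float)
    for img, t in zip(images, exposure_times):
        w = hat_weight(img.astype(float))
        num += w * (response[img] - np.log(t))   # log irradiance estimate
        den += w
    return np.exp(num / np.maximum(den, 1e-8))   # back to linear irradiance
```

With a linear response curve, a pixel of value 100 at 1 s and 200 at 2 s both estimate the same irradiance, and the weighted mean recovers it.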
In this lab assignment, image-based lighting was performed on a set of Light Stage images using omnidirectional high dynamic range images of real-world illumination. The new lighting is calculated from the known positions of the light sources and the corresponding images. The vectors from the center of the Light Stage to the light source positions are transformed into latitude-longitude coordinates in order to gather light information from the HDR environment map.
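The direction-to-texel mapping can be sketched as follows (the axis convention, y up and -z forward, is an assumption; conventions vary between environment map formats):

```python
import numpy as np

def dir_to_latlong_uv(d):
    """Map a unit direction vector to (u, v) in [0,1]^2 for a
    latitude-longitude environment map (y assumed up)."""
    x, y, z = d
    u = (np.arctan2(x, -z) / np.pi + 1.0) * 0.5   # longitude around y
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi  # latitude from the pole
    return u, v
```

Sampling the HDR map at (u, v) for each light source direction then gives the radiance with which to relight the corresponding Light Stage basis image.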
A two-pass method for calculating point-based ambient occlusion was implemented on the GPU using GLSL and OpenGL in the course TNCG14 - Advanced Computer Graphics at Linköping University. The program can load Wavefront OBJ files and render them with point-based ambient occlusion. The result is a good approximation of a ray-traced ambient occlusion render, but at much greater speed. A point-based rendering visualization method called "affinely projected point sprites" was also implemented and compared to the usual way of visualizing objects with triangles.
The lab sessions were divided into three categories.
The data structure used was the half-edge mesh, which allows easy access to neighboring faces. Curve subdivision was implemented and compared to an analytical cubic B-spline curve. Mesh subdivision was implemented with the Loop subdivision algorithm and modified to handle adaptive subdivision driven by the mean curvature.
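The curve-subdivision part can be illustrated with the cubic B-spline refinement rules on a closed control polygon (a sketch under that assumption, not the lab code): each step inserts edge midpoints and smooths old vertices with 1/8, 6/8, 1/8 weights, converging to the analytical cubic B-spline curve.

```python
import numpy as np

def bspline_subdivide(points, steps=1):
    """Cubic B-spline subdivision of a closed control polygon."""
    p = np.asarray(points, float)
    for _ in range(steps):
        nxt = np.roll(p, -1, axis=0)
        prv = np.roll(p, 1, axis=0)
        verts = (prv + 6.0 * p + nxt) / 8.0   # smoothed old vertices
        edges = (p + nxt) / 2.0               # new edge midpoints
        p = np.empty((2 * len(verts), p.shape[1]))
        p[0::2], p[1::2] = verts, edges       # interleave vertex/edge points
    return p
```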
Quadric implicit surfaces such as the sphere, cone, and ellipsoid were implemented. The implicit operators erosion, dilation, and advection were implemented, as well as super-elliptic blending for higher than C0 continuity. Parabolic diffusion and hyperbolic advection were implemented for the level set method, and then improved with a more accurate time integration scheme, Runge-Kutta, and a higher-order spatial discretization scheme, WENO.
The basic functionality of a stable Eulerian fluid solver based on the Navier-Stokes equations was implemented: external forces, Dirichlet boundary conditions, and the projection step. The solver was then improved by adding vorticity confinement and forced volume preservation.
This method simulates short fur that moves with the object in real time. Using shells textured with 2D simplex noise at different thresholds and shaded with the Kajiya-Kay hair shading method, a good approximation of realistic fur was achieved at interactive frame rates. To make the fur more interesting, Voronoi noise was implemented as well, giving the fur stripes and dots. The result was realistic short fur running at about 30 fps and above, where the frame rate, unlike in most methods, did not depend on the number of fur strands.
The project covered planning and carrying out all the steps of an advanced movie production, from synopsis to finished product. We had one day to rig and plan everything in a green-screen studio and one day to shoot the film, so planning was essential. Postproduction was done in After Effects and Premiere, and the CG elements were created in Blender3D. The result was a short movie named A pawful of dollars, about three bears planning a robbery of the Bank of New York.
The project was about creating an Android game in an agile development environment with a group of 12 people. Daily scrums, burndown charts, and sprints were used heavily and were very useful for planning and dividing the work among us. The result was an Android game called Blockz, in which stone blocks are moved into certain pits to open the doors to the next level.
A stochastic Monte Carlo renderer was implemented on the CPU, with photon maps for global illumination and caustics. The renderer handles perfect and diffuse reflections, refraction with caustics, global illumination via photon maps, implicit and polygonal surfaces, and area light sources. We also used OpenMP to distribute the calculations across all CPU cores for a significant speedup.
The optical music recognition program, written in MATLAB, can extract notes from scanned sheet music, as well as from sheet music photographed at different angles, and returns the corresponding chords.
The A* algorithm is a good way of finding a path to a specified target in an environment with obstacles. It takes into account how difficult or costly it is to cross certain terrain and chooses the path with the lowest total cost. In this example the target is a bunch of bananas, which the monkey desperately wants, crossing rivers and jungle to reach them.
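A minimal grid-based sketch of the algorithm (the grid encoding, per-cell entry cost with 0 as an impassable obstacle, and the Manhattan heuristic are assumptions for illustration):

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid of per-cell step costs; 0 marks an obstacle.
    Manhattan distance is the admissible heuristic for 4-connectivity."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start, [start])]   # (f, g, position, path)
    best = {start: 0}                            # cheapest known g per cell
    while open_set:
        _, g, pos, path = heapq.heappop(open_set)
        if pos == goal:
            return path
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = pos[0] + dy, pos[1] + dx
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx]:
                ng = g + grid[ny][nx]            # cost of entering the cell
                if ng < best.get((ny, nx), float("inf")):
                    best[(ny, nx)] = ng
                    heapq.heappush(
                        open_set,
                        (ng + h((ny, nx)), ng, (ny, nx), path + [(ny, nx)]))
    return None                                  # no path exists
```

Raising the cost of a "river" cell makes the search route around it, which is exactly how the monkey trades a longer path for cheaper terrain.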
In this project we simulated deformable objects in 2D using a mass-spring system. Verlet integration was implemented, which gave the mass-spring system better stability and speed. The system also handles fracture when springs are stretched too far. We used OpenGL for rendering and GTK+ for the GUI.
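The integration step can be sketched as follows (1D, two masses, one spring; the spring constant and masses are placeholder values, not the project's parameters):

```python
def spring_accel(pos, rest_len, k, mass):
    """Acceleration on two equal masses joined by one Hookean spring."""
    ext = (pos[1] - pos[0]) - rest_len   # spring extension
    f = k * ext                          # Hooke's law
    return [f / mass, -f / mass]         # equal and opposite

def verlet_step(pos, prev, acc, dt):
    """Stoermer-Verlet: x' = 2x - x_prev + a*dt^2. Velocity is implicit
    in the last two positions, which makes the integrator very stable."""
    nxt = [2.0 * p - q + a * dt * dt for p, q, a in zip(pos, prev, acc)]
    return nxt, pos                      # new state: (current, previous)
```

Fracture then amounts to removing a spring whenever its extension exceeds a break threshold before computing the forces.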
V.I.K.I.N.G.S is a short movie created in Blender3D about three vikings on a voyage whose longship sails right into the gaping maw of a fierce sea monster, so they must fight bravely to survive.