ATI SDK


R2VB Skin Animation

 

One commonly used type of animation in computer graphics is skinning. In skinning animation, one first computes the animation matrices and then uses these matrices to transform the vertices of the mesh. Before programmable shading hardware was introduced, it was common to compute bone matrices and transform the vertices of the animated mesh on the CPU. With the advent of vertex shaders, applications could perform animation on the GPU using matrix palettes stored in shader constants. Doing animation in vertex shaders is a common real-time rendering technique in modern games and 3D applications. In many cases, when performing animation on a GPU, vertex data can be preloaded to video memory just once and used whenever necessary for the animation. Avoiding dynamic vertex data transfers from system memory to video memory significantly improves animation performance.

However, there are other bottlenecks and potential inefficiencies associated with this technique. First, although we no longer need to transform and blend vertices on the CPU, we still need to compute the animation matrices there. The second, and most important, bottleneck is data transfer: because we compute the animation matrices on the CPU, we must upload them to the GPU with the SetVertexShaderConstantF() function. Passing constants to the GPU incurs a relatively large CPU overhead because it involves many memory copy operations. Another big problem is the relatively small number of vertex shader constants available to store animation matrices. Because of this constant storage limitation, developers have to limit the number of animation matrices or split a model into several smaller meshes, each using a smaller animation matrix palette.

The render to vertex buffer (R2VB) technique solves all of the problems mentioned above. Using R2VB, developers can compute animation matrices on the GPU using pixel shaders. By storing animation data in textures, there is practically no limit on the number of animation matrices. Besides that, moving the animation into pixel shaders gains the benefit of the greater computational power available there: the Radeon X1900 chip, for example, has 48 pixel shader ALU units, compared to 8 ALU units in the vertex pipe. Skinned vertex blending requires many arithmetic operations, so computing it in the pixel shader is a very good idea.
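The per-vertex blend that moves into the pixel shader can be sketched as follows. This is an illustrative Python version of matrix-palette skinning, not code from the SDK sample; all names are ours.

```python
# Illustrative matrix-palette skinning (names are ours, not the SDK's).
# Each vertex carries up to four bone indices and blend weights; its
# skinned position is the weighted sum of the vertex transformed by
# each referenced bone matrix.

def transform(bone, v):
    """Apply a 3x4 row-major bone matrix to a position (implicit w = 1)."""
    return [bone[r][0] * v[0] + bone[r][1] * v[1] +
            bone[r][2] * v[2] + bone[r][3] for r in range(3)]

def skin_vertex(position, bone_indices, bone_weights, palette):
    """Blend 'position' across the bones listed in bone_indices."""
    out = [0.0, 0.0, 0.0]
    for idx, w in zip(bone_indices, bone_weights):
        if w == 0.0:
            continue  # unused bone slot
        p = transform(palette[idx], position)
        out = [o + w * c for o, c in zip(out, p)]
    return out

# Two bones: identity, and a translation of +2 along x.
identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shifted  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]
# A 50/50 blend lands halfway between the two transformed positions.
print(skin_vertex([1.0, 0.0, 0.0], [0, 1], [0.5, 0.5], [identity, shifted]))
# prints [2.0, 0.0, 0.0]
```

With R2VB, this blend runs in the pixel shader and its result is written to a texture that is then bound as a vertex buffer.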

Sometimes, developers use multiple rendering passes to achieve certain visual effects. With the traditional vertex shader skinning method, the vertex blending must be executed for each of the rendering passes. This is another area where R2VB can offer a significant performance improvement. The R2VB technique allows developers to store the results of the animation in video memory, so vertex blending needs to be executed only once per frame, no matter how many times the animated model is rendered. Rendering with shadow maps or with stencil shadow volumes is a good example of this. Using traditional methods, the animation is computed in the vertex shader at least twice: once for the rendered model and once more each time a shadow is rendered. Using the R2VB-based animation technique, you can compute the animation in the pixel shader just once and render the animated model as many times as you need.
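The saving described above can be made concrete with a back-of-the-envelope count (our illustration, not SDK code): vertex-shader skinning re-runs the blend in every pass, while R2VB blends once and reuses the stored result.

```python
# Illustrative accounting (ours, not SDK code): with vertex-shader skinning
# the blend runs again for every rendering pass, while with R2VB the blended
# mesh is written to a vertex buffer once and reused by every pass.

def blend_ops_per_frame(num_vertices, num_passes, use_r2vb):
    """Number of per-vertex blend evaluations needed for one frame."""
    return num_vertices if use_r2vb else num_vertices * num_passes

# A 10,000-vertex model drawn in 3 passes (e.g. color plus two shadow passes):
print(blend_ops_per_frame(10_000, 3, use_r2vb=False))  # 30000
print(blend_ops_per_frame(10_000, 3, use_r2vb=True))   # 10000
```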


R2VB Dynamic Deformation

This sample demonstrates how to use the render to vertex buffer (R2VB) technique for terrain deformation and height map based collision detection. Because R2VB allows the results from the pixel shader to be stored in a texture, we can keep the data on the GPU and do all processing there. In this sample we perform real-time terrain deformation with a dynamic mesh, and height map based collision detection, on the GPU without any assistance from the CPU.
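The core of a height map based collision test can be sketched in Python (illustrative names and layout, not the sample's actual shader code): reconstruct the terrain height at a fractional position by bilinearly filtering the four surrounding texels, as a GPU texture sampler would, and compare it against the object's height.

```python
# Illustrative height-map collision test (names and layout are ours).
# The terrain height at a fractional (x, z) position is reconstructed by
# bilinearly filtering the four surrounding height-map texels; an object
# collides when it dips below that surface.

def bilinear_height(height_map, x, z):
    """Sample height_map[row][col] at fractional (x, z); row = z, col = x."""
    x0, z0 = int(x), int(z)
    fx, fz = x - x0, z - z0
    h00, h10 = height_map[z0][x0],     height_map[z0][x0 + 1]
    h01, h11 = height_map[z0 + 1][x0], height_map[z0 + 1][x0 + 1]
    near = h00 * (1 - fx) + h10 * fx
    far  = h01 * (1 - fx) + h11 * fx
    return near * (1 - fz) + far * fz

def collides(height_map, x, y, z):
    """True when the point (x, y, z) is at or below the terrain surface."""
    return y <= bilinear_height(height_map, x, z)

# A 2x2 height map ramping from 0 up to 1 along x:
terrain = [[0.0, 1.0],
           [0.0, 1.0]]
print(collides(terrain, 0.5, 0.4, 0.0))  # True  (terrain height is 0.5)
print(collides(terrain, 0.5, 0.6, 0.0))  # False
```

In the sample itself, both the deformed height map and this test live on the GPU, so no height data ever has to be read back to the CPU.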


R2VB Shadow Volume Extrusion

These days, stencil shadow volumes have become one of the mainstream shadowing techniques in real-time rendering. Although we can generate the shadow volume mesh and determine the silhouette at load time for static meshes, we cannot do so for animated models, especially skinned models, whose silhouettes change every frame. This has forced developers to calculate skinning animation on the CPU. However, with render to vertex buffer (R2VB) we can send data from the pixel shader back to the vertex shader, which lets us determine the silhouette on the GPU without any assistance from the CPU. Performing skinning animation on the CPU is quite expensive, so moving everything onto the GPU frees the CPU for more gameplay-enhancing computations such as AI or physics.
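The silhouette test itself is the standard one: an edge lies on the silhouette when exactly one of the two triangles sharing it faces the light. A minimal Python sketch of that test (our illustration, not the sample's shader code):

```python
# Illustrative silhouette-edge test (ours, not the sample's shader code).
# An edge lies on the silhouette when exactly one of the two triangles
# sharing it faces the light source.

def faces_light(tri, light):
    """True if the triangle's front side faces a point light at 'light'."""
    a, b, c = tri
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    normal = [u[1] * v[2] - u[2] * v[1],
              u[2] * v[0] - u[0] * v[2],
              u[0] * v[1] - u[1] * v[0]]
    to_light = [light[i] - a[i] for i in range(3)]
    return sum(n * l for n, l in zip(normal, to_light)) > 0.0

def is_silhouette_edge(tri_a, tri_b, light):
    """The edge shared by tri_a and tri_b is a silhouette edge iff their facings differ."""
    return faces_light(tri_a, light) != faces_light(tri_b, light)

# Two triangles in the y = 0 plane, wound so one normal points +y and the
# other -y, with a point light above: their shared edge is a silhouette edge.
up    = [(0, 0, 0), (0, 0, 1), (1, 0, 0)]  # normal points +y
down  = [(0, 0, 0), (1, 0, 0), (0, 0, 1)]  # normal points -y
light = (0.0, 5.0, 0.0)
print(is_silhouette_edge(up, down, light))  # True
```

On the GPU, the skinned triangle positions come straight from the R2VB skinning pass, so this facing test can run per edge in a pixel shader without any CPU readback.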


R2VB Inverse Kinematics

This sample demonstrates how to use the render to vertex buffer (R2VB) technique for inverse kinematics. Because R2VB allows the results from the pixel shader to be fed into the vertex shader, we can keep the data on the GPU and do all processing there. In this sample we do hierarchy transformations and inverse kinematics calculations on the GPU without any assistance from the CPU.
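The kind of calculation being moved onto the GPU can be illustrated with a minimal two-bone analytic IK solve in 2D, sketched here in Python (our illustration, not the sample's actual code). The forward kinematics step also shows the hierarchy transform: the elbow angle is expressed relative to its parent, the shoulder bone.

```python
import math

# Illustrative two-bone analytic IK in 2D (ours, not the sample's code).
# Given bone lengths l1, l2 and a target point, solve for the joint
# angles via the law of cosines, then verify with forward kinematics
# that the end effector reaches the target.

def two_bone_ik(l1, l2, tx, ty):
    """Return (shoulder, elbow) angles placing the end effector at (tx, ty)."""
    d2 = tx * tx + ty * ty
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    cos_elbow = max(-1.0, min(1.0, cos_elbow))  # clamp unreachable targets
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(ty, tx) - math.atan2(l2 * math.sin(elbow),
                                               l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward_kinematics(l1, l2, shoulder, elbow):
    """Hierarchy transform: the elbow angle is relative to the shoulder bone."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

shoulder, elbow = two_bone_ik(1.0, 1.0, 1.2, 0.5)
x, y = forward_kinematics(1.0, 1.0, shoulder, elbow)
print(round(x, 6), round(y, 6))  # 1.2 0.5
```

In the sample, solves like this run per joint in the pixel shader, and the resulting transforms stay in video memory for rendering via R2VB.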
