Volume rendering techniques

Volume rendering techniques were developed to overcome the problems isosurface techniques have in representing surfaces accurately. In short, these problems stem from having to decide, for every volume element, whether or not the surface passes through it; this can produce false positives (spurious surfaces) or false negatives (erroneous holes in surfaces), particularly in the presence of small or poorly defined features. In contrast to surface rendering techniques, volume rendering uses no intermediate geometrical representation. It can therefore display weak or fuzzy surfaces, which frees one from having to decide whether a surface is present at all.

Volume rendering involves the following steps: forming an RGBA volume from the data, reconstructing a continuous function from this discrete data set, and projecting it onto the 2D viewing plane (the output image) from the desired point of view. An RGBA volume is a 3D data set of four-vectors, whose first three components are the familiar R, G, and B color components and whose last component, A, represents opacity: a value of 0 means totally transparent and a value of 1 totally opaque. An opaque background is placed behind the RGBA volume. The mapping from data values to opacity acts as a classification of the features one is interested in. Isosurfaces can be shown by mapping the corresponding data values to nearly opaque values and all others to transparent values; the appearance of these surfaces can be improved by using shading techniques to form the RGB mapping. Opacity can, however, also be used to reveal the interior of the data volume, which then appears as a cloud of varying density and color. A major advantage of volume rendering is that this interior information is not thrown away, so that one can examine the 3D data set as a whole. Disadvantages are that the cloudy interiors can be difficult to interpret and that volume rendering takes considerably longer than surface rendering.
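The classification step described above can be sketched as a simple transfer function. The following is a minimal, hypothetical example (the function name, iso-value, and falloff width are illustrative assumptions, not part of any particular package): data values near a chosen iso-value are mapped to nearly opaque red, everything else to transparent.

```python
import numpy as np

def scalar_to_rgba(volume, iso_value, width=0.05):
    """Map a scalar volume with values in [0, 1] to an RGBA volume.

    A minimal, illustrative transfer function: opacity peaks at
    iso_value and falls off linearly with distance from it, so the
    corresponding isosurface shows up as a nearly opaque shell.
    """
    rgba = np.zeros(volume.shape + (4,))
    # Opacity A: 1 at the iso-value, 0 beyond the chosen width.
    rgba[..., 3] = np.clip(1.0 - np.abs(volume - iso_value) / width, 0.0, 1.0)
    # Constant red color; a shading technique could modulate RGB instead.
    rgba[..., 0] = 1.0
    return rgba

# A tiny 3x3x3 scalar volume with values evenly spaced in [0, 1].
vol = np.linspace(0.0, 1.0, 27).reshape(3, 3, 3)
rgba = scalar_to_rgba(vol, iso_value=0.5)
```

Mapping a wider range of values to small but nonzero opacities, instead of a narrow peak, would produce the cloud-like interior views mentioned above.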

We will describe two implementations of volume rendering: ray casting and splatting. These implementations are used in the four visualization packages we have compared (see chapter 3). The two methods differ in the way the RGBA volume is projected onto the 2D viewing plane.

Ray casting

Several implementations exist for ray casting. We describe the implementation used in Visualization Data Explorer. For every pixel in the output image, a ray is shot into the data volume. At a predetermined number of evenly spaced locations along the ray, the color and opacity values are obtained by interpolation. The interpolated colors and opacities are merged with each other and with the background by compositing in back-to-front order to yield the color of the pixel. These compositing calculations are simple linear transformations. Specifically, the color C_out of the ray as it leaves each sample location is related to the color C_in of the ray as it enters, and to the color c(x_i) and the opacity a(x_i) at that sample location, by the transparency formula:

C_out = C_in (1 - a(x_i)) + c(x_i) a(x_i)

Applying this formula in back-to-front order, i.e. starting at the background and moving towards the image plane, produces the pixel color. It is clear from the formula that the opacity acts as a data selector: sample points with opacity values close to 1 hide almost all the information along the ray between the background and the sample point, while opacity values close to 0 pass the information on almost unaltered. This way of compositing is equivalent to the dense-emitter model, in which the color indicates the instantaneous emission rate and the opacity the instantaneous absorption rate.
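The compositing described above can be sketched for a single ray as follows. This is an illustrative implementation of the transparency formula, not the Data Explorer code itself; the function name and the example colors are assumptions made for the sketch.

```python
import numpy as np

def composite_ray(colors, opacities, background):
    """Back-to-front compositing of the samples along one ray.

    colors:     (n, 3) array of RGB samples c(x_i), back-to-front order
    opacities:  (n,) array of opacities a(x_i) in [0, 1]
    background: (3,) opaque background color

    Applies C_out = C_in * (1 - a(x_i)) + c(x_i) * a(x_i) at every
    sample, starting from the background, and returns the pixel color.
    """
    c = np.asarray(background, dtype=float)
    for c_i, a_i in zip(colors, opacities):
        c = c * (1.0 - a_i) + np.asarray(c_i, dtype=float) * a_i
    return c

# A single fully opaque red sample hides the white background entirely.
pixel = composite_ray(np.array([[1.0, 0.0, 0.0]]),
                      np.array([1.0]),
                      background=[1.0, 1.0, 1.0])
```

The two extremes illustrate the data-selector role of opacity: with a(x_i) = 1 only the sample's own color survives, while with a(x_i) = 0 the information behind it passes through unaltered.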


Splatting

This technique was developed to improve on the speed of volume rendering techniques like ray casting, at the price of less accurate rendering. We will not go into detail here, as the technique is rather complicated. It differs from ray casting in the projection method: splatting projects voxels, i.e. volume elements, onto the 2D viewing plane. The projection of each voxel is approximated by a so-called Gaussian splat, which depends on the opacity and the color of the voxel (other splat types, such as linear splats, can also be used). A projection is made for every voxel and the resulting splats are composited on top of each other in back-to-front order to produce the final image.
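The footprint of a single voxel on the viewing plane can be sketched as below. This is a strongly simplified illustration under assumed parameters (patch size, kernel width, axis-aligned projection); a real splatting implementation would derive the kernel shape and extent from the view transform and the voxel spacing.

```python
import numpy as np

def gaussian_splat(center, color, opacity, size=5, sigma=1.0):
    """Footprint of one voxel on the image plane as a Gaussian splat.

    Returns a (size, size, 4) RGBA patch centered on `center`
    (pixel coordinates): the voxel's color with an opacity that
    falls off as a Gaussian around the projected voxel center.
    """
    ys, xs = np.mgrid[0:size, 0:size]
    # Radially symmetric Gaussian weight around the projected center.
    w = np.exp(-((xs - center[0]) ** 2 + (ys - center[1]) ** 2)
               / (2.0 * sigma ** 2))
    patch = np.zeros((size, size, 4))
    patch[..., :3] = np.asarray(color, dtype=float)  # constant voxel color
    patch[..., 3] = opacity * w                      # weighted voxel opacity
    return patch

# Splat of a green voxel with opacity 0.8, projected at pixel (2, 2).
splat = gaussian_splat(center=(2, 2), color=(0.0, 1.0, 0.0), opacity=0.8)
```

Compositing such patches over the image in back-to-front order, using the same transparency formula as in ray casting, yields the final picture.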