Physicians trained in cross-sectional imaging can understand the spatial relationship between tissue structures simply by exploring the cross-sectional slice images. For communication purposes, however, it is often helpful to generate simulated views of isolated objects of interest from different viewpoints. 3D image processing makes it possible to derive such objects from the slice images and to calculate virtual reality scenes which can be explored interactively.
Surface Rendering (SR)
One way to derive a virtual object is to track (segment) an organ boundary in the slice images, build a 3D surface from the contours, and shade that surface. The P3D tool supports such segmentations with thresholding and/or region growing, optionally restricted to volumes-of-interest. As a special option, the object can be colored by projecting the information of matched images onto its surface (texturing), even animated over time. This feature allows, for instance, visualizing the concentration changes of the NH3 perfusion tracer on the myocardium surface throughout a dynamic acquisition.
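P3D's actual segmentation implementation is not public; the following minimal Python sketch only illustrates the general idea of threshold-restricted region growing in a 3D volume. All names and the choice of 6-connectivity are illustrative assumptions, not PMOD internals.

```python
from collections import deque
import numpy as np

def region_grow(volume, seed, threshold):
    """Collect all voxels connected to `seed` whose value is >= `threshold`.

    Illustrative sketch: breadth-first traversal with 6-connectivity.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    if volume[seed] < threshold:
        return mask                      # seed itself fails the threshold
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and volume[nz, ny, nx] >= threshold):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Example: a bright 3x3x3 cube embedded in a dark volume
vol = np.zeros((10, 10, 10))
vol[3:6, 3:6, 3:6] = 100.0
seg = region_grow(vol, (4, 4, 4), threshold=50.0)
print(seg.sum())  # 27 voxels segmented
```

The resulting binary mask is the kind of segment from which a shaded surface (or, below, a skeleton) can subsequently be built.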
Volume Rendering (VR)
A different way to derive a virtual object is to take a certain viewpoint in front of the object, cast rays through the object, and record the ray values where the rays pass through a plane behind the object, thereby producing an image. The value recorded in the image plane is a combination of the values of all the voxels met along the way from the viewpoint to the image plane, hence the name Volume Rendering. Typically, the combination is simply the sum of the voxel values, each multiplied by a weighting function called opacity. The result depends heavily on the image values and on the opacity function. Highly effective configurations are available for contrast-enhanced CT examinations which provide the illusion of a three-dimensional representation, especially when the viewpoint is changed interactively.
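The weighted summation along rays can be sketched as follows. For simplicity the rays are taken parallel to one volume axis (an orthographic projection), and the opacity function is a hypothetical linear ramp; PMOD's actual transfer functions and perspective ray geometry are more elaborate.

```python
import numpy as np

def volume_render(volume, opacity):
    """Orthographic volume rendering along axis 0.

    Each output pixel is the sum of the voxel values along its ray,
    each value weighted by an opacity transfer function.
    """
    weights = opacity(volume)            # per-voxel weight in [0, 1]
    return (volume * weights).sum(axis=0)

# Hypothetical opacity ramp: transparent below 20, fully opaque above 80
def ramp(v, lo=20.0, hi=80.0):
    return np.clip((v - lo) / (hi - lo), 0.0, 1.0)

vol = np.zeros((4, 2, 2))
vol[1] = 50.0     # semi-transparent layer (weight 0.5)
vol[2] = 100.0    # opaque layer (weight 1.0)
image = volume_render(vol, ramp)
print(image)      # every pixel: 50*0.5 + 100*1.0 = 125.0
```

Changing the ramp thresholds is the sketch's analogue of tuning the opacity configuration: it decides which tissue value ranges dominate the rendered image.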
Skeleton Rendering (Path)
An additional way to derive a virtual object is to extract the "center-lines" of a 3D binary image generated by a segmentation algorithm. These lines are known as "paths" or "skeletons" and can represent a 3D object efficiently, just as SR or VR objects do. Curve skeletons are well suited to describing tube-like anatomical structures, e.g. vessels, nerves, and elongated muscles. When a selected path is bound to an oblique plane, the plane is automatically placed perpendicular to the path direction. This feature facilitates placing the plane and cutting a vessel at a 90° angle.
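The two ideas above, a center-line through a tube-like segment and a plane placed perpendicular to it, can be sketched as follows. This is a deliberately simplified illustration (the tube is assumed to run along one axis, and the center-line is the per-slice center of mass); real skeletonization algorithms handle arbitrary orientations and branching structures.

```python
import numpy as np

def tube_centerline(mask):
    """Approximate center-line of a tube-like binary segment.

    Simplified sketch: assumes the tube runs roughly along axis 0 and
    takes the in-plane center of mass of each nonempty slice.
    """
    path = []
    for z in range(mask.shape[0]):
        ys, xs = np.nonzero(mask[z])
        if ys.size:
            path.append((z, ys.mean(), xs.mean()))
    return np.array(path)

def plane_normal(path, i):
    """Path tangent at point i; an oblique plane cutting the structure
    perpendicularly uses this tangent as its normal vector."""
    a = path[max(i - 1, 0)]
    b = path[min(i + 1, len(path) - 1)]
    t = b - a
    return t / np.linalg.norm(t)

# Example: a straight one-voxel "vessel" along the z-axis
mask = np.zeros((5, 7, 7), dtype=bool)
mask[:, 3, 3] = True
path = tube_centerline(mask)
print(plane_normal(path, 2))   # [1. 0. 0.] -> a transaxial cutting plane
```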
Scene Generation in P3D
The virtual reality scenes in P3D are constructed by segmenting tissue structures from images and rendering them using Surface, Volume or Skeleton Rendering techniques. The scenes can be interactively explored to understand the spatial relationships of the segmented tissue structures. To this end the scene can be rotated in any direction and zoomed, and objects obstructing the view to deeper ones can be temporarily hidden or set to a high degree of transparency. Furthermore, planes showing the original image data, or volumes-of-interest (VOI), can be added. Meaningful renderings can be saved as screen captures, or a movie of a rotating scene can be generated. Protocol files allow reconstructing a particular scene from the original data at any later time.
Combination of 3D objects from Matched Series
A unique feature of P3D is the ability to combine and manipulate different types of virtual reality objects (VRO) in one common scene, even when they stem from different studies. Once a scene has been created, it can be saved as a file in STL format (SR objects only). These files can later be loaded into P3D to continue scene exploration.
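STL is an open, widely supported mesh format, which is what makes the saved surfaces portable. As an illustration of how simple the ASCII variant is, a minimal writer for a list of triangles might look like this; it is not PMOD's exporter, just a sketch of the format.

```python
def write_ascii_stl(path, triangles, name="scene"):
    """Write triangles (each a tuple of three (x, y, z) vertices) as ASCII STL.

    Minimal illustrative exporter; facet normals are left at (0, 0, 0),
    which most viewers recompute from the vertex winding order.
    """
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for v0, v1, v2 in triangles:
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in (v0, v1, v2):
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# Example: a single triangle in the z = 0 plane
write_ascii_stl("triangle.stl", [((0, 0, 0), (1, 0, 0), (0, 1, 0))])
```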
Note: Starting with PMOD v4.3, the Save/Load procedures for VRML scenes are no longer supported.
In addition, the curve skeletons can be saved as paths (*.vec). These files can later be loaded into P3D to recreate the skeleton scene. Note that the 3D binary image segment initially used to create the skeleton is no longer required.