Issue 6.2009

VRIC 2008

VR Based Visualization and Exploration of Plant Biological Data

  1. Wolfram Schoor Fraunhofer Institute for Factory Operation and Automation (IFF)
  2. Felix Bollenbeck Fraunhofer Institute for Factory Operation and Automation (IFF)
  3. Thomas Seidl Fraunhofer Institute for Factory Operation and Automation (IFF)
  4. Diana Weier Leibniz Institute of Plant Genetics and Crop Plant Research (IPK)
  5. Winfriede Weschke Leibniz Institute of Plant Genetics and Crop Plant Research (IPK)
  6. Bernhard Preim Otto von Guericke University
  7. Udo Seiffert Fraunhofer Institute for Factory Operation and Automation (IFF)
  8. Rüdiger Mecke Fraunhofer Institute for Factory Operation and Automation (IFF)


This paper investigates the use of virtual reality (VR) technologies to facilitate the analysis of plant biological data at distinct steps of the application pipeline. Reconstructed three-dimensional biological models (primarily polygonal models) transferred to a virtual environment support scientists' collaborative exploration of biological datasets, helping them obtain accurate analysis results and uncover information hidden in the data. Examples of the use of virtual reality in practice are provided, and a complementary user study is reported.

Published: 2010-01-29


1.  Introduction

Three-dimensional content opens completely new possibilities for exploring information, extracting knowledge and delivering presentations in virtual environments. While three-dimensional, interactive representations of virtual prototypes are widespread in various domains of industry such as engineering and design, this kind of interactive 3-D model is rarely used in crop plant research. In the field of design, for example, the added value of 3-D information can already dramatically increase the effectiveness of work in the prototype design phase. Data visualization and exploration in virtual environments, based on such interactive 3-D models, will likewise become extremely important in crop plant research, because answers to key questions regarding functional relations are hidden in the 3-D structures or architecture of plants. New possibilities are enabling crop plant researchers to analyze scientific data in immersive environments. New insights from the analysis of biological data help optimize plant growth and crop yields. This is instrumental to improving farming and could contribute to reducing the problem of world hunger.

2.  Related Work

A brief overview of existing VR technologies and their features is followed by a short introduction to basic biomedical methods applied to data pools and concrete examples of VR applications development for plants or plant parts.

2.1.  VR Display Systems

The types of presentation in use are as diverse as the multitude of fields of application for data exploration and visualization with virtual interactive environments. Adapted from a classification applied by Biocca and Levy [ BL95 ] and based on [ Ber04 ], three basic forms of visual VR presentation exist:

  • Immersive VR: Users are completely or almost completely integrated into a virtual environment, e.g. a CAVE or head-mounted displays (see [ Lan07 ] and [ CR06 ] for a survey). High acquisition costs and locational constraints on the technology have made its commercial use difficult in the past. Providing immersive VR as a service to small and medium-sized enterprises would make such solutions financially more attractive, especially when a larger group of users can use these systems simultaneously. Figure 1 presents an example of a visual grain model inspection.

    Figure 1. Interactive visualization of three-dimensional biological models in a CAVE.


  • Semi-immersive VR: Output devices such as vision stations, (stereoscopic) wall screens with up to 220 megapixels such as HIPerSpace (www.ucsd.edu), or single-chip stereo projectors [ HHF07 ] fill most of the field of vision. This form of presentation generates a high level of user immersion and may be used by small user groups. Declining hardware prices will make it increasingly important in the near future. Figure 2 presents a semi-immersive display, the "immersive engineering workstation", with a biological scenario.

    Figure 2. Visualizing grain structures in a semi-immersive presentation.


  • Desktop VR: The presentation only fills a certain region of the field of vision (typically less than half of the field of vision, see Figure 3). 3-D displays such as Free2c (www.hhi.fraunhofer.de) are also possible. This presentation generally takes the form of a single-user system for continuous use.

    Figure 3. Conventional desktop VR with an active stereo system.


For additional information on VR display technologies see [ Hof05 ].

2.2.  Related Biological Work

Three-dimensionally resolved image data, traditionally acquired on a macroscopic scale in medical applications, have become more and more available to the life sciences and biology at microscopic or even molecular resolution, often delivering stunning insights into structure and function. Observing structure and function in its natural spatial or spatio-temporal extension is not only more intuitive to investigators, but often elucidates the underlying principles of dynamics and mechanisms in living matter; e.g., [ JLM08 ] delivers impressive examples of proving assumptions about intra-cellular motor mechanisms by 3-D image analysis and visualization. While in medical applications a standard for imaging devices and acquisition exists, 3-D imaging in biology is highly application-specific. Non-destructive bio-imaging methods such as laser scanning microscopy, X-ray or magnetic resonance imaging have been employed to generate models of insects [ PH04 ], [ KHKB07 ] and plants [ Gli06 ], [ BMH00 ] on microscopic scales, either by direct intensity-based visualizations or by further processing such as tissue or organ labeling. Destructive methods overcome limitations in resolution and sample depth by serially sectioning the object, allowing analysis of histological structure and functional data such as gene-specific expression levels [ RBB94 ], [ GDB07 ], with comprehensive demands on image reconstruction as the downside.

While sample preparation and wet experiments are labor- and resource-intensive, careful analysis and visualization of biological 3-D images has become more and more important in recent years. Standard or device-shipped software often provides insufficient functionality for specific tasks, resulting in an increasing number of published software packages and comprehensive libraries for biomedical image analysis and visualization.

A small number of VR applications are in use in plant and crop research. While they are based on VR technologies, they employ different approaches and have different foci. Based on their applications, VR technologies for plant biological objects can be subdivided into the following categories:

  • 3-D model creation using plant or plant part modeling techniques [ DHL98 ] and [ LD98 ] to design realistic plants for rendering.

  • 3-D model acquisition by laser scanning digitalization techniques as described in [ MBST06 ] to acquire high-resolution geometrical and color information about the plant phenotype.

  • Plant or plant part growth simulation is a well established field of research [ XZZ06 ] and [ DVMVdH03 ]. The purpose of this research area is to optimize plant growth.

  • Plants or plant part analysis is especially applied to identify regions and formations that control crop growth and thus the plant itself. VR is mainly used to visualize the results of an analysis [ GDB07 ].

While the field of biomedical data research has a variety of feasible solutions at its disposal, the field of plant biology research unfortunately still does not. Hence, algorithms and VR technologies have to be adapted for specific applications.

3.  Model Generation

Knowledge of real life objects and specific characteristics of imaging have to be considered to understand the requirements for the whole workflow from data acquisition to the biologists' conclusions. This process can be summarized as a model generation pipeline and is presented in Figure 4. Optional and alternative work steps are indicated by a dashed line.

Figure 4. Processing workflow: The graph shows the interplay of different modules in reconstruction and abstraction towards high-resolution 3-D models. Dashed lines indicate optional processing steps, such as image stitching. The three main problem domains preprocessing, segmentation and surface reconstruction are boxed.


3.1.  Data Acquisition and Storage

Biological material was prepared for visual analysis, specifically microtome serial-section data of different stages of barley seed development (days after flowering, DAF), yielding high-resolution histological images similar to the method described in [ GDB07 ]. Each dataset consists of approximately 2,000 slice images of a developing barley caryopsis, obtained from a microtome at 3 μm slice thickness. Digitized with a color CCD camera, the images originally measured 1600 x 1200 pixels at a spatial resolution of 1.83 μm x 1.83 μm per pixel. Since color-space analysis revealed an almost linear correlation and a limited range, it was possible to reduce the 12-bit-per-channel RGB images to 8-bit grayscale images without significant loss, reducing the data volume of each dataset to approximately 5 GB. Depending on the developmental stage of the specimen, sections can be too large to be captured in a single frame with the given spatial resolution and imaging setup. In this case, mosaic images are composed from several sub-image scans. Since sub-image acquisition is not performed in a fixed arrangement, the restored image is computed based on image content, using a phase-correlation-based algorithm (see [ Bra65 ]) and the multi-resolution blending of image intensities proposed in [ SHC04 ].
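The phase-correlation step used for stitching can be illustrated in a few lines. This is a minimal sketch (function name and tolerance constant are ours, not the authors'): the cross-power spectrum of two images is normalized to pure phase, and its inverse FFT peaks at the integer translation between them.

```python
import numpy as np

def phase_correlation_offset(ref, moving):
    """Estimate the integer (row, col) translation aligning `moving`
    to `ref` via the phase of the cross-power spectrum [Bra65]."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moving)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12   # keep only the phase
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half of each axis wrap to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))

# Example: shift a random tile and recover the offset.
rng = np.random.default_rng(0)
tile = rng.random((64, 64))
shifted = np.roll(tile, shift=(5, -3), axis=(0, 1))
print(phase_correlation_offset(shifted, tile))  # -> (5, -3)
```

A real stitcher would additionally blend intensities across the seam, e.g. with the multi-resolution scheme of [ SHC04 ].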

Available data also include gene expression assays using mRNA in-situ hybridization (ISH) probing of histological cross-sections. Whereas in-situ data allow visualizing spatial gene expression patterns, staining with gene-specific probes requires complex chemical protocols, so the ISH image data originate from different individual grains.

Magnetic resonance spectroscopy measurements (1H-NMR) of caryopses at different points in time were acquired with a dedicated device. While non-destructive, the detected proton distribution has a coarser spatial resolution of approximately 10 μm (isotropic) per voxel and does not resolve histology or structural features (see Figure 5).

Figure 5. NMR imaging data of a developing barley grain, specifically two volume renderings of intensity voxel data from different angles and with different cutting planes. While NMR is a noninvasive imaging modality, it has a coarser (10 μm/voxel) spatial resolution than light microscopy.


Other potential datasets, such as macroarray gene expression data or metabolomic assays by laser microdissection (LMD), could be integrated into the database to enhance synergy and facilitate the inference of analysis results. Figure 6 presents the database with different data domains and an example image. Figure 7 shows a pie chart breaking down data distribution per model. The organization is derived from the entity-attribute-value (EAV) design as in [ Anh03 ]. This approach has the advantage of high flexibility when extending the database with other data domains.

Figure 6. Schematic view: Integration of different data domains by using an EAV model for the visualization database.


Figure 7. Data distribution for one model, the absolute data totaling around 9 GB.

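The entity-attribute-value organization can be sketched with a tiny relational schema. This is a minimal illustration of the EAV idea, not the authors' actual database: table and column names, entities and attributes below are invented for the example; the point is that a new data domain only requires new rows, never a schema change.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Every datum is an (entity, attribute, value) triple [Anh03].
    CREATE TABLE entity   (id INTEGER PRIMARY KEY, kind TEXT);
    CREATE TABLE attribute(id INTEGER PRIMARY KEY, name TEXT, domain TEXT);
    CREATE TABLE value_   (entity_id INTEGER REFERENCES entity(id),
                           attribute_id INTEGER REFERENCES attribute(id),
                           value TEXT);
""")
con.execute("INSERT INTO entity VALUES (1, 'barley_grain_model')")
con.executemany("INSERT INTO attribute VALUES (?, ?, ?)",
                [(1, 'slice_count', 'histology'),
                 (2, 'voxel_size_um', 'NMR')])      # two data domains
con.executemany("INSERT INTO value_ VALUES (?, ?, ?)",
                [(1, 1, '2000'), (1, 2, '10')])
rows = con.execute("""
    SELECT a.domain, a.name, v.value
    FROM value_ v JOIN attribute a ON a.id = v.attribute_id
    WHERE v.entity_id = 1 ORDER BY a.id
""").fetchall()
print(rows)  # [('histology', 'slice_count', '2000'), ('NMR', 'voxel_size_um', '10')]
```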

3.2.  Image Preprocessing

In general, data sampled from real-world objects exhibit an inevitable level of noise and limited spatial and temporal resolution. Therefore, image processing and image analysis are necessary to enhance the relevant information. Applications have many diverse requirements, and solutions for end users are often devoted to specific questions with little transparency regarding algorithms and parameters. While users with biological backgrounds frequently lack detailed knowledge of image processing algorithms and theory, consistent and comprehensible approaches are necessary for reproducible and exact results.

Eliminating artifacts, distortions and signal noise in sampled images, e.g. caused by preparation and handling during digitization, is not only beneficial in terms of generating 'good' data for subsequent processing and visualization, but is often a prerequisite for measuring biological quantities in image data.

While the following categorization is somewhat arbitrary, the problems discussed below can nevertheless be considered the prominent ones in 3-D biomedical imaging.

The processing workflow for generating high-resolution 3-D models from digitized raw data follows the flowchart in Figure 4. While the principles behind the three main tasks, preprocessing, segmentation and surface reconstruction, are relevant for subsequent visualization and navigation, a detailed description of the process is beyond the scope of this paper.

Since the full spatial resolution is preserved through to the final surface modeling, the amounts of data to be processed are large. Preprocessing of the raw images includes image intensity normalization and noise removal. While an adequate mapping of intensity values is computationally cheap, the exact delineation and masking of the object of interest (see Figure 9) is performed using geodesic active contours [ CKS97 ] on a hierarchical image representation for faster convergence.

Image registration is the process of transforming two or more n-D images so that corresponding structures match. Registering images onto a reference is a prerequisite to collecting and comparing structural features, and therefore one of the problems most frequently arising in biomedical image processing prior to any further analysis and visualization, as surveyed in [ MV98 ] and [ GB92 ].

Image registration is generally formulated as an optimization problem over an image metric, maximizing the correspondence of images based on image grid point intensities or extracted features such as edge information, employed in quadratic or correlation-based distance formulations. Desired properties, such as smoothness of the transformation, can be enforced by regularizing the objective function with additional terms evaluating the actual transformation (see [ Mod04 ] for an overview).
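The metric-driven formulation can be made concrete with a toy example: a sum-of-squared-differences (SSD) metric minimized by exhaustive search over integer translations. This is a deliberately simplified sketch (names and search window are ours); the actual pipeline uses richer transformations and the hierarchical schemes discussed below.

```python
import numpy as np

def register_translation_ssd(fixed, moving, search=8):
    """Exhaustively minimize the SSD metric over integer translations
    in [-search, search]^2 (circular shifts, for simplicity)."""
    best, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            candidate = np.roll(moving, (dy, dx), axis=(0, 1))
            ssd = np.sum((fixed - candidate) ** 2)
            if ssd < best:
                best, best_shift = ssd, (dy, dx)
    return best_shift

rng = np.random.default_rng(1)
img = rng.random((48, 48))
misaligned = np.roll(img, (-4, 2), axis=(0, 1))
print(register_translation_ssd(img, misaligned))  # -> (4, -2)
```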

Registration based on the correspondence of unique points, i.e. landmarks, is difficult since such landmarks are often ambiguous and hard to detect in biomedical images.

The choice of transformations is driven by the application, i.e. by the physical process underlying the distortion of images, necessitating different numerical schemes. This study employs registration techniques to reconstruct distorted serial section 2-D images to compose an intact 3-D intensity dataset and integrate image data from different imaging modalities in a 3-D model. Modersitzki [ Mod04 ] extensively discusses numerical schemes and applications.

The 2-D/3-D reassembly of a serial-section dataset requires registration of the whole image stack. Since in-core processing of arrays several gigabytes in size is unfeasible, transformations are optimized within a local range of consecutive stack images as described in [ Mod04 ]. Resolution hierarchies allow fast and robust optimization schemes that combine exhaustive and gradient-descent searches (see [ ISNC03 ] for details). The intact 3-D object is thereby restored, composing a large voxel dataset showing internal histological structures (Figure 8).

Figure 8. Serial section data: The three-dimensional coherence of histological structures is reconstructed by registering the stack of section images. The figure depicts a volume rendering of the histological voxel data and virtual cutting planes.


3.3.  Image Segmentation

Segmenting images into homogeneous regions on the basis of application-specific criteria is a crucial step of image analysis.

Figure 9. Using automated segmentation algorithms a complete labeling of multiple biologically relevant tissues in images is obtained.


All image segmentation algorithms are intended to decompose an image into segments with a specific meaning for a given application [ HS85 ] or, expressed more technically, to identify regions that uniformly relate to a specific criterion such as texture or image intensity [ Hah05 ]. The literature, specifically [ LM01 ], [ Hah05 ] and [ Kan05 ], presents a variety of approaches to classification. With respect to user interaction, image segmentation methods are classified here into manual, semiautomatic, and automatic segmentation. Figure 9 shows an example of a complete segmentation of a barley grain slice with its tissue legend.

Manual segmentation: This is the simplest image segmentation technique and is commonly applied to structures with widely varying morphologies. The user specifies a set of points that are connected afterwards. The contour can additionally be smoothed by familiar interpolation methods [ Far88 ].

Since numerous segmentation points have to be specified manually, the main disadvantage of this technique is the time and mental effort required; accordingly, these segmentation results are less reproducible in terms of inter-rater variability. An example of manual segmentation in the context of the authors' work on spatial plant models is discussed in [ GDB07 ].
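Contour smoothing of hand-clicked points can be sketched with Chaikin corner cutting, a classic subdivision scheme that converges to a quadratic B-spline. This is one possible stand-in for the "familiar interpolation methods" of [ Far88 ], chosen here for brevity; the function name and iteration count are ours.

```python
import numpy as np

def chaikin(points, iterations=3, closed=True):
    """Smooth a polygonal contour by Chaikin corner cutting: each edge is
    replaced by points 1/4 and 3/4 along it, doubling the point count."""
    pts = np.asarray(points, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0) if closed else pts[1:]
        prv = pts if closed else pts[:-1]
        q = 0.75 * prv + 0.25 * nxt
        r = 0.25 * prv + 0.75 * nxt
        pts = np.empty((len(q) + len(r), 2))
        pts[0::2], pts[1::2] = q, r
    return pts

# A coarse 4-point 'contour' becomes a smooth closed curve.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
smooth = chaikin(square)
print(len(smooth))  # -> 32 points after 3 subdivision steps
```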

Voxel data is decomposed into subvolumes of specific tissues. The method described in [ BS08 ] enables automatic segmentation of the full stack of approximately 2,000 images from a small set of reference images that a biologist has labeled manually or semiautomatically.

Automatic segmentation: Algorithmic approaches to segmentation are generally superior to manual or semiautomatic segmentation in terms of time savings and objectivity. Although a large number of methods, e.g. graph-based [ BSC06 ] or variational formulations of the segmentation problem [ CKS97 ], [ TYJW03 ], have been proposed, biomedical image segmentation remains difficult. The non-uniqueness of features and structures often necessitates expert knowledge for correct identification and labeling. This particularly holds for histological image data in the plant sciences, since here tissues in images are discriminated by small-scale structures in their structural context, due to cell walls (see Figure 8), rather than by global properties such as intensity. The authors have demonstrated in previous studies that incorporating expert knowledge is indispensable for exact segmentation results (see [ BSS06 ], [ BS08 ]).

Segmentation algorithms incorporating expert-generated reference data, as described in [ BS08, BWSS09 ], have proven to deliver high segmentation accuracy, especially in non-binary segmentation scenarios. In [ BS08 ] a registration-based segmentation algorithm is described that performs on par with supervised classification schemes such as support vector machines and neural approaches when segmenting serial-section data into multiple classes based on a small set of user-generated references.

Semiautomatic segmentation: These approaches are helpful since automatic segmentation results may contain inaccuracies, or the algorithms applied may require a small set of reference images to solve a given problem. In some cases, it may be impossible to use automatic segmentation at all. User-steered semiautomatic segmentation algorithms can help to eliminate these drawbacks. Although a multitude of other semiautomatic segmentation algorithms exist, the LiveWire method [ MB98 ], [ FUS98 ] proved best suited for postprocessing a few misclassifications and preparing a small number of reference segmentations. A modified LiveWire technique for fast and accurate segmentation of independent serial sections is presented in [ SBH08 ].
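The core of LiveWire-style boundary snapping is a shortest-path search on a per-pixel cost image, where edges in the image carry low cost. The sketch below (names and cost model ours) uses plain Dijkstra with 8-connectivity; the actual method of [ MB98 ] adds gradient-direction terms and interactive, on-the-fly path updates.

```python
import heapq
import numpy as np

def livewire_path(cost, start, goal):
    """Dijkstra shortest path on a cost image; the path 'snaps' to
    low-cost pixels, i.e. to image edges."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = cost[start]
    heap = [(cost[start], start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == goal:
            break
        if d > dist[y, x]:
            continue                      # stale heap entry
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[ny, nx] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
    path, node = [], goal
    while node != start:                  # backtrack from goal to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]

# Cheap pixels along row 2 act like an image edge the path snaps to.
cost = np.ones((5, 7))
cost[2, :] = 0.01
path = livewire_path(cost, (2, 0), (2, 6))
print(path)  # follows row 2 from (2, 0) to (2, 6)
```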

Semiautomatic segmentation algorithms ideally supplement automatic segmentation. The solution presented here is a hybrid approach consisting of automatic offline segmentation and the option for users to intervene with semiautomatic segmentation.

3.4.  Surface Reconstruction

For a level of abstraction higher than intensity stacks of images, boundaries of the labeled voxel data are converted into an explicit, geometric representation. The algorithm applied in Section 3.4 - Triangulation - generates triangle-mesh approximations of isosurfaces in one pass, but has the disadvantage of generating a large number of faces without adapting the geometry to the true isosurface. This makes a subsequent surface simplification necessary to reduce the number of faces (see Section 3.4 - Surface reduction and remeshing).

The literature supplies a variety of reconstruction methods, each with different drawbacks. Methods operating at the voxel level execute subvoxel resampling, use non-binary segmentation masks or apply segmentation with subvoxel accuracy.

Other data sources for surface reconstruction include point clouds resulting from the intersection of different models, or contour lines output by the semiautomatic segmentation. A number of existing solutions have been implemented, including the adaptation of parameters to the specific data characteristics. Commercial applications are also used to reconstruct the 3-D surface. The necessary and optional steps for surface reconstruction are described below.

Triangulation: Cell-based simple triangulation similar to the marching cubes algorithm proposed by Lorensen and Cline [ LC87 ] and its variants [ LLVT03 ] has been implemented. The main drawbacks experienced so far are the immense number of triangles generated and sampling problems in areas where sharp edges are expected. The power crust algorithm (see [ ACK01 ]) calculates a triangulated boundary representation of the input dataset, which is a structured or unstructured point cloud. The prerequisites for good results are a dense point cloud (a criterion that is generally fulfilled) and a smooth object surface (due to the origin of the data, a weak criterion that is not always fulfilled). Moreover, the standard power crust algorithm is unable to handle large quantities of input data. Triangulation may also be based on adjacent or nearly adjacent contour lines in an image stack, e.g. [ MBMB06 ] with an implicit curve-fitting approach, or [ Kep75 ], who formulates the 3-D triangulation of successive contour lines as an isomorphic problem, namely a directed graph search in 2-D.

Surface reduction and remeshing: The 3-D surface data of plant biological models may consist of a very large number of small triangles (80 million triangles or more per model). The surface reconstruction algorithms (e.g., marching cubes) tend to result in oversampled representations in most regions of the triangulated structure. Eliminating extraneous data necessitates a surface reduction as in [ SZL92 ] and [ GH97 ]. Figure 10 presents a grain model with different mesh resolutions. Figure 10(b) contains 16% of the number of triangles in Figure 10(a). A remeshing step can be advantageous for a well distributed surface representation especially if the surface data is used for simulation purposes (see Section 4.2).

Figure 10. Surface models of one grain at different resolutions (a) left, a grain with a mesh size of 250,000 triangles, and right, detail with highlighted faces (b) left, reduced mesh with approximately 40,000 highlighted faces, and right, detail with highlighted faces.

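The face-count reduction can be illustrated with vertex clustering, a much cruder scheme than the decimation and quadric methods of [ SZL92 ] and [ GH97 ] but short enough to show the principle: snap vertices to a grid, merge coincident ones, and drop the triangles that collapse. Function and variable names below are ours.

```python
import numpy as np

def cluster_decimate(vertices, faces, cell=1.0):
    """Reduce a triangle mesh by clustering vertices on a grid of size
    `cell`; merged vertices are averaged, degenerate faces dropped."""
    keys = np.floor(np.asarray(vertices, float) / cell).astype(int)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inverse)
    new_vertices = np.zeros((len(uniq), 3))
    for axis in range(3):   # average the vertices falling into each cell
        new_vertices[:, axis] = np.bincount(inverse, vertices[:, axis]) / counts
    new_faces = inverse[np.asarray(faces)]
    keep = ((new_faces[:, 0] != new_faces[:, 1]) &
            (new_faces[:, 1] != new_faces[:, 2]) &
            (new_faces[:, 0] != new_faces[:, 2]))
    return new_vertices, new_faces[keep]

# Oversampled flat strip: 41 columns, 2 rows -> 80 tiny triangles.
xs = np.linspace(0, 10, 41)
verts = np.array([(x, y, 0.0) for y in (0.0, 1.0) for x in xs])
faces = []
for i in range(40):
    a, b, c, d = i, i + 1, 41 + i, 41 + i + 1
    faces += [(a, b, c), (b, d, c)]
v2, f2 = cluster_decimate(verts, np.array(faces), cell=1.0)
print(len(faces), '->', len(f2))  # far fewer faces after clustering
```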

Surface smoothing: [ BHP06 ] presents a survey of mesh smoothing algorithms applied to data from medical applications, concerning the efficiency, accuracy, and appropriateness of different smoothing strategies for surface models. No such work has been done on plant biological data, which has its own idiosyncrasies stemming from imaging, segmentation and surface extraction. At the moment, research is being done on handling staircases, i.e. small shifts caused by the rigid registration process, and oversegmented areas. Comparable surveys of surface smoothing can be found in [ BO03 ] and [ YOB02 ]. Figure 11 shows an automatically segmented grain model after triangulation, triangle reduction and remeshing, from different perspectives.

Figure 11. A surface representation of multiple label isovalues is obtained by automatically segmenting histological section datasets into biologically relevant tissues. The extracted isosurfaces are smoothed and remeshed.

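One of the simplest members of the smoothing family discussed above is Laplacian smoothing, sketched below on a toy mesh (function name and parameters ours). Note its well-known drawback, mesh shrinkage, which Taubin-style lambda/mu schemes and the strategies surveyed in [ BHP06 ] are designed to counter.

```python
import numpy as np

def laplacian_smooth(vertices, faces, iterations=10, lam=0.5):
    """Basic Laplacian smoothing: each vertex moves a fraction `lam`
    toward the average of its mesh neighbors per iteration."""
    n = len(vertices)
    neighbors = [set() for _ in range(n)]
    for a, b, c in faces:                 # build adjacency from triangles
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    v = np.array(vertices, dtype=float)
    for _ in range(iterations):
        mean = np.array([v[list(nb)].mean(axis=0) for nb in neighbors])
        v += lam * (mean - v)
    return v

# Smoothing an octahedron pulls its vertices toward the centroid.
verts = np.array([(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)], float)
faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
         (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]
smoothed = laplacian_smooth(verts, faces, iterations=1)
print(np.linalg.norm(smoothed, axis=1))  # all radii shrink from 1.0 to 0.5
```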

4.  Application

Figure 12 highlights the major tasks of the application presented here. These include aspects of primary visualization, virtual extraction of volume with a deformation model as well as dissection and subsequent analysis of extracted physical tissue.

Figure 12. Application workflow: Model reconstruction and VR-based virtual extraction close the circuit from real-world modeling to the acquisition of new data by physical dissection.


4.1.  Visualization of Biological Data

The main purpose of the visualization is to represent the acquired data in a way that a user understands intuitively [ MJM06 ]. The following two examples are representative of the multitude of visualizations of plant biological models. With the aid of VR technologies, these concrete examples produce clear added value for biologists.

Visualization of different data domains: One of the most important tasks for biologists is the comparison of data from different data domains. A suitable visualization can provide added value in this case, as the following example shows. Using 4-D warping algorithms, as described in [ PSM08 ], virtual intermediate developmental stages can be generated to achieve better congruence of data originating from different individual grains. Figure 13 combines an NMR volume representation and a surface model representation of histological data.

Figure 13. Visualization of different data modalities: NMR-based voxel intensities are displayed by volume rendering and combined with a surface view of labeled histological data.


Visualization of model differences: Since grain growth and size are subject to inherent biological diversity, models representing the morphological variance at a specific developmental stage are highly valuable, especially for integrating heterogeneous data sampled from different individual grains. [ BWSS09 ] analyzes variances and features common among individual grains at a specific point in time and quantifies them by compiling inter-individual stochastic atlases of grains, thus enabling the quantification and visualization of biodiversity in a specimen.

4.2.  Virtual Extraction

Once the user has visualized the desired dataset, an appropriate interface must be provided to define the volume of interest (VOI). This can be achieved by initially choosing a prototype from a model catalog or selecting the structure to be modified. The user can then interact with the 3-D model. An efficient and more intuitive way of doing this is in 3-D space with a 3-D interface (for example with the SpaceNavigator, www.3dconnexion.de). Figure 14 presents the standard 2-D case of using the deformation feature. Even with new, innovative metaphors (see [ SSB08 ]), a 2-D view of the model modified by 3-D widgets with a 2-D mouse interface is in general not as intuitive and accurate as a direct 3-D modification, especially for beginners. The current deformation area is chosen through freehand region selection (OpenGL picking; see the red points in Figure 14). Mass-spring models, as described in [ Wit97 ] and [ BC00 ], are used for deformation purposes. Well-distributed intraconnectivity and uniformly distributed triangles on the surface (see the wireframe model in Figure 14) ensure that the mass-spring model's behavior is stable and visually plausible [ LSH07 ]. Good intraconnectivity can, for example, be achieved by tetrahedrization, as described in [ AMSW08 ]. Given the extremely high complexity of these 3-D plant models, the deformation model was also implemented with extensive GPU support.

Figure 14. Modification of model data to determine the VOI.


The VOI can be saved later (e.g. as a VRML file) and used for the dissection.
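The mass-spring deformation used for VOI definition can be sketched in a few lines. This toy version (unit masses, explicit Euler integration, constants and names ours) captures the idea of [ Wit97 ]-style simulation while omitting the tetrahedral connectivity and GPU parallelization of the production system.

```python
import numpy as np

def step_mass_spring(pos, vel, springs, rest, k=50.0, damping=0.9,
                     dt=0.01, pinned=()):
    """One explicit-Euler step: unit masses, linear springs,
    multiplicative velocity damping, optionally pinned vertices."""
    force = np.zeros_like(pos)
    for (i, j), L in zip(springs, rest):
        d = pos[j] - pos[i]
        length = np.linalg.norm(d)
        f = k * (length - L) * d / length   # Hooke's law along the spring
        force[i] += f
        force[j] -= f
    vel = damping * (vel + dt * force)      # unit mass: acceleration = F
    vel[list(pinned)] = 0.0                 # pinned vertices do not move
    return pos + dt * vel, vel

# Two masses joined by one stretched spring relax toward rest length 1.
pos = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
vel = np.zeros_like(pos)
springs, rest = [(0, 1)], [1.0]
for _ in range(400):
    pos, vel = step_mass_spring(pos, vel, springs, rest)
print(np.linalg.norm(pos[1] - pos[0]))  # converges to the rest length 1.0
```

Stability of such explicit schemes depends on the stiffness/time-step ratio, which is one reason the well-distributed connectivity noted in the text matters for plausible behavior.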

4.3.  Dissection and Analysis

The VOI is extracted from the physical object with a technique patented by the industrial machinery manufacturer Molecular Machines & Industries (www.molecular-machines.com). This application is not discussed further here.

Microdissection experiments allow quasi-spatially localized assays based on small sample volumes dissected from a specimen, thus providing a 'snapshot' of the differentially expressed genes and their regulation during development. The authors of [ TWS08 ] present a study of gene expression patterns in tissue-specific assays of developing barley caryopses utilizing 2-D LMD preparation of fixed serial sections. Being tissue-specific, the results of the LMD assays can be fed back into the visualization process (see Figure 12) and integrated with other data modalities.

5.  Application Scenarios and Results

The following application scenarios, in which VR techniques can be applied, were identified at different stages of the biologists' work and of the pipeline.

  1. Segmentation postprocessing

  2. Collaborative data exploration and model validation

  3. Definition of the virtual model's VOI

  4. Biological models for marketing

Segmentation Postprocessing: Biological data segmentation is a multistage process with a classification accuracy of at least 90% when performed with the approach presented here. Visual revision requires incorporating not only the spatial information of the 2-D images, but also a visualization of the reconstructed 3-D model itself. Semi-immersive single-user VR systems, such as passive stereo with Infitec glasses (www.infitec.net), are one good option. Even better suited to the technicians' revision process are autostereoscopic 3-D displays such as those from Spatial View (www.spatialview.com). Figure 15 shows an example of VR in use.

Figure 15. Segmentation postprocessing on an eye-tracked single user SeeFront 21” 3-D display from spatial view.


Collaborative Data Exploration and Model Validation: A major part of the proposed pipeline is the exchange of information between different working groups at the various working stages. Biological VR contents can be processed quite easily for collaborative work in the large-scale immersive environment of the 'Elbe Dom', described in [ SMM08 ] and in more technical detail in [ SMH07 ]. It is an excellent platform for combining different data domains in a single visualization or for linking annotations (text or files) to models. Figure 16 presents an example.

Figure 16. Collaborative review of new 3-D grain models and their components in a multi-user immersive environment (see [ SMM08 ]).


Definition of the Virtual Model's VOI: Computer-aided definition of 3-D volumes of interest is another scenario identified for non-conventional biological use. Biologists can define a virtual subset of a plant model that corresponds to the structure of a real-life object of interest to them. Figure 17 shows an example of model deformation on a 3-D display.

Figure 17. Definition of 3-D VOI on a SeeReal Technologies 3-D display.


Biological Models for Marketing: Being able to present current work to external audiences is important to most enterprises and organizations. A 3-D demonstration of current results is generally preferable to a conventional presentation and leaves a more lasting impression on viewers.

6.  Experimental Evaluation

An experiment was conducted to evaluate the performance of selected VR techniques in the field of plant biological research. Subjects were split into two groups and required to perform three different tasks: a Navigation Task to evaluate navigation skills in a 3-D environment (speed and accuracy), an Exploration Task to analyze how quickly subjects can locate regions of interest in a 3-D model, and a Manipulation Task to identify suitable forms of presentation.

6.1.  Experimental Setup

Hardware Setup: The new SpacePilot Pro from 3Dconnexion was employed in the experiments for 6 DOF interaction (see Figure 18). A standard Microsoft laser mouse served as the 2-D input device (see Figure 18). A low-cost tracking system consisting of an IR sensor (see www.nintendo.com) and IR LEDs (reflection angle larger than 70 degrees) was used for head tracking; the LEDs are mounted on (stereo) glasses. Figure 18 presents the setup with standard glasses and IR LEDs.

Figure 18. The different interaction devices: (left) SpacePilot Pro, (middle) standard mouse, and (right) head tracking.


Software Setup: In order to obtain results that lend themselves to comparison, a virtual trackball navigation metaphor (see [ Hul90 ]) was applied to the head tracking navigation as well as the 2-D and the 3-D mouse navigation. Table 1 provides a brief overview of the implemented navigation.

Table 1.  Navigation control.

Action       SpacePilot Pro       2-D mouse                           Head tracking
Spin Left    roll cap left        LeftMouseButton + drag left         move head left
Spin Right   roll cap right       LeftMouseButton + drag right        move head right
Spin Up      tilt cap forward     LeftMouseButton + drag forward      move head up
Spin Down    tilt cap backward    LeftMouseButton + drag backward     move head down
Zoom In      push cap             RightMouseButton + drag forward     move head forward
Zoom Out     pull cap             RightMouseButton + drag backward    move head backward
Roll Left    spin cap left        MiddleMouseButton + drag left       tilt head left
Roll Right   spin cap right       MiddleMouseButton + drag right      tilt head right
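As a rough illustration of the virtual trackball metaphor from [ Hul90 ] that underlies all three navigation mappings, the following sketch projects 2-D input coordinates onto a sphere and derives a rotation axis and angle from two successive input points. Function names and the hyperbolic fallback for points outside the sphere are illustrative, not taken from the paper's implementation:

```python
import math

def project_to_sphere(x, y, radius=1.0):
    """Project a 2-D input point (x, y in [-1, 1]) onto the virtual
    trackball sphere; points far from the center fall back onto a
    hyperbolic sheet so the mapping stays continuous."""
    d2 = x * x + y * y
    r2 = radius * radius
    if d2 < r2 / 2.0:                  # inside the sphere
        z = math.sqrt(r2 - d2)
    else:                              # outside: hyperbola
        z = r2 / (2.0 * math.sqrt(d2))
    return (x, y, z)

def trackball_rotation(p_start, p_end):
    """Return (axis, angle in radians) of the rotation that carries
    the projection of p_start onto the projection of p_end."""
    a = project_to_sphere(*p_start)
    b = project_to_sphere(*p_end)
    # the rotation axis is perpendicular to both projected points
    axis = (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])
    na = math.sqrt(sum(c * c for c in a))
    nb = math.sqrt(sum(c * c for c in b))
    dot = sum(x * y for x, y in zip(a, b)) / (na * nb)
    angle = math.acos(max(-1.0, min(1.0, dot)))
    return axis, angle
```

Dragging from the screen center slightly to the right, for example, yields a small rotation about the vertical screen axis, which corresponds to the Spin Left/Right rows of Table 1.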

Subjects: The experiments were conducted with two groups. The first group consisted of ten subjects (two female, eight male; researchers and university students) without any special background in biology; all had moderate or substantial experience with 3-D techniques. The second group consisted of five subjects (two female, three male; all researchers) with backgrounds in biology; all had little or moderate experience with 3-D techniques. For each subject, the order of the techniques was chosen randomly.

6.2.  Navigation Task

In the experimental tasks, the subjects employed the different techniques / controllers to approximately adjust their position and orientation (equivalent to a virtual camera) to a set of specified positions (within a predefined radius) and/or orientations (scalar product above a predefined threshold). A common navigation metaphor was applied for this task (see Table 1). The Navigation Task represents a typical and important function in the application.
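A minimal sketch of such a pose-completion check, with hypothetical function names and tolerance values (the paper only states that a predefined radius and a scalar-product threshold were used):

```python
import math

def target_reached(cam_pos, cam_dir, target_pos, target_dir,
                   radius=0.5, dot_threshold=0.95):
    """A target pose counts as reached when the virtual camera is
    within a predefined radius of the target position and its viewing
    direction is close to the target direction (scalar product of the
    unit direction vectors above a predefined threshold)."""
    dist = math.dist(cam_pos, target_pos)

    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)

    dot = sum(a * b for a, b in zip(unit(cam_dir), unit(target_dir)))
    return dist <= radius and dot >= dot_threshold
```

In a task loop this predicate would be evaluated each frame; the subtask's completion time is the time until it first returns true.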

Initial Conditions: Before the complex navigation test could start, spin, zoom, and roll had to be evaluated separately to verify that the subjects would obtain comparable results with the different controllers for each navigation metaphor subset (e.g., spin). For instance, zoom and roll were disabled for the spin evaluation. The sensitivity of the controlling devices was adjusted once before the start of testing. To ensure equal conditions, no acceleration parameters were used (e.g. for the 3-D mouse). Ten users from both groups participated in the initial testing. Since the differences remained within the expected range (see Figure 25 in the Appendix), the subsequent navigation and exploration tests delivered results that allow sound conclusions.

Procedure: Each subject was required to solve a set of eight subtasks twice by employing each of the three techniques (3-D mouse, mouse and head tracking). The sequence of controlling devices was altered randomly, and every task had to be finished before the next controller was used. Before the start of a test, each controller's mode of action was explained and the subjects were given a brief preparation time (approximately two minutes) to familiarize themselves with it. Completion times and accuracy values, i.e. the average of the position accuracy (spin and zoom) and the roll accuracy, were measured for each subtask.

Experimental Results from the First Group: The first group's results for the complex navigation task (combination of spin, zoom, and roll) confirm the results of the initial navigation experiment. Subjects performed the complex task best with the mouse. The subjects' mean completion time using the mouse (derived from the averaged times of the subtasks) was half the mean completion time using the 3-D mouse (SpacePilot Pro) or the head tracking system. The standard deviation of the mean completion time for a subtask was far smaller with the 2-D mouse than with the 3-D mouse, but comparable to that of head tracking. Accuracy was best with the head tracking system, though it did not differ much from the accuracy with the 3-D mouse; accuracy with the 2-D mouse was worst (approximately 8-10% less accurate than the 3-D mouse and head tracking). The results with the different techniques were consistent with the results of the separate navigation analysis.

Figure 19. Results of the complex Navigation Task from the first group, which included subjects without any background in biology.


The 2-D mouse is the standard input device for nearly every 2-D and 3-D task. Given their familiarity with this device, the subjects were able to complete the task more quickly, as the results in Figure 19 indicate. The simple virtual trackball metaphor supports this and allows fast and accurate navigation. On the other hand, fine tuning appeared to be more difficult (in terms of accuracy). Alternatively, the subjects may have achieved faster completion times at the expense of accuracy; however, since not all subjects uniformly changed their strategy toward speed or accuracy, this explanation is uncertain. The standard deviation of the completion times was significantly smaller with head tracking than with the 3-D mouse. Apparently, head tracking can be used without prior experience, with consistent results across subjects; the simple head tracking concept is conducive to intuitive use. This hypothesis was corroborated by subjects' responses to an informal questionnaire.

Experimental Results from the Second Group: The results from the second group correspond with those from the first group. The subjects were likewise able to complete the task fastest with the mouse (twice as fast as with the reference technique), but at the expense of accuracy. The standard deviations of all techniques were far higher than in the reference group.

Figure 20. Results of the complex Navigation Task from the second group, which included subjects with backgrounds in biology.


The results presented in Figure 20 point to the same conclusion as the results from group one, bearing in mind that the pool of subjects is too small for statistical conclusions. The nearly constant navigation times with good accuracy values for head tracking in this group too reinforce the preceding assumption.

6.3.  Exploration Task

In a second evaluation scenario, concealed areas of interest in a typical 3-D plant biology model were analyzed. Since the regions are represented by glyphs, subjects without a biology background are also able to localize them. A glyph is only visible when the subject's position lies within a predefined range of it, similar to a small detail of a model that is only recognizable up close. The alpha value decreases as the subject's virtual position converges on the glyph. Roll is not relevant for this task.
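The distance-dependent glyph visibility described above can be sketched as a simple fade function. Here the returned value is treated as opacity (fully visible up close, invisible beyond a cut-off); the range parameters are illustrative assumptions, not values from the experiment:

```python
def glyph_alpha(distance, visible_range=10.0, opaque_range=2.0):
    """Opacity of a glyph as a function of the viewer's distance:
    invisible beyond visible_range, fully opaque within opaque_range,
    and a linear fade in between."""
    if distance >= visible_range:
        return 0.0
    if distance <= opaque_range:
        return 1.0
    return (visible_range - distance) / (visible_range - opaque_range)
```

A glyph rendered with this opacity gradually emerges as the subject's virtual camera approaches it, which is the cue the subjects searched for.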

Procedure: Each subject was required to solve an exploration task consisting of a set of twelve subtasks by employing each of the three techniques (3-D mouse, mouse and head tracking). The sequence of control devices was altered randomly, as were the positions of the glyphs in the model. Every task had to be completed before the next controller was used. This experimental task built upon the Navigation Task. Before the start of a test, each controller's mode of action was explained and the subjects were given a brief preparation time (approximately one minute) to familiarize themselves with it. The completion times of each subtask were measured.

Experimental Results from the First Group: 2-D mouse interaction was the fastest technique for completing the exploration tasks. Nine out of ten subjects performed best with the 2-D mouse, and the remaining subject (see Figure 21, user #4) was nearly as fast with the 2-D mouse as with the other techniques. On average, users performed 50% faster with 2-D mouse interaction than with the 3-D mouse and head tracking. Subjects performed similarly with head tracking and the 3-D mouse, with head tracking only slightly faster. Seven out of ten participants were faster or nearly as fast with head tracking than with the 3-D mouse. The standard deviations of the averaged subtask completion times were very high for the 3-D mouse as well as for head tracking. Furthermore, timing differed greatly among users.

Figure 21. Experimental results of the first group for the Exploration Task.


Again, subjects displayed good results with the 2-D mouse for this task. The relative time differences of the tested devices do not vary as greatly as the average time differences of the subtasks obtained during the Navigation Task. The variations in the standard deviation were large because not all glyphs can be found with the same speed.

Experimental Results from the Second Group: The subjects achieved the best results with the 2-D mouse interaction technique (see Figure 22). On average, users performed approximately 66% faster with 2-D mouse interaction than with the 3-D mouse, and only 15% faster than with head tracking. Four out of five subjects were faster or nearly as fast with head tracking than with the 3-D mouse. The standard deviations of the averaged subtask completion times were higher than in the reference group. Furthermore, timing differed greatly among users.

Figure 22. Results of the Exploration Task from the second group.


The second group also displayed good results with the 2-D mouse for this task. The timing differences were not much greater than those of the averaged subtasks performed during the Navigation Task, or than the first group's timing. The timing for head tracking differed only marginally from that of the group with more 3-D skills.

6.4.  Manipulation Task

The last test scenario is intended to evaluate the influence of the visual representation (monoscopic / stereoscopic / shadow indication / stereoscopic with shadow indication) on the accuracy of a manipulation task being performed. This is an elementary task in the definition process of VOIs. The subjects were required to move a 3-D arrow in a precise direction indicating the main direction of the subsequent deformation (cf. the same principle in Figure 14). The stereo method utilized throughout the experiment was circular polarization (see [ Hof05 ]). Shadow indication was implemented by means of hard shadows (see [ Wil78 ]). The biologists defined an average variation of 5-10 degrees from the optimal orientation as the accuracy requirement for this task. The goal was to identify the forms of representation that meet this requirement.

Procedure: Each subject was required to solve the manipulation task, consisting of a set of eight randomized subtasks, by employing each of the representations (monoscopic, stereoscopic, shadow, and stereo with shadow). The sequence of the forms of representation was altered randomly. Before the start of a test, each subject received a short briefing on the task to be performed (approximately one minute). Accuracy was measured for this task (angular deviation between the target direction and the adjusted direction).
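The accuracy measure, the angular deviation between the adjusted arrow direction and the optimal direction, together with the biologists' 5-10 degree requirement, can be sketched as follows; function names and the simple averaging are illustrative assumptions:

```python
import math

def angular_deviation_deg(adjusted_dir, target_dir):
    """Angle in degrees between the adjusted arrow direction and the
    optimal target direction; smaller values mean higher accuracy."""
    def unit(v):
        n = math.sqrt(sum(c * c for c in v))
        return tuple(c / n for c in v)
    dot = sum(a * b for a, b in zip(unit(adjusted_dir), unit(target_dir)))
    # clamp against floating-point overshoot before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

def meets_requirement(deviations, limit=10.0):
    """Check the biologists' requirement of an average deviation of
    at most 5-10 degrees (upper bound used here) from the optimum."""
    return sum(deviations) / len(deviations) <= limit
```

Each representation condition then yields one deviation per subtask, and the per-subject average decides whether the condition meets the predefined accuracy requirement.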

Experimental Results from the First Group: As expected, the accuracy values of the stereoscopic approach were better than those of the monoscopic approach, but the difference between the two was small (two degrees on average). The accuracy results with shadow indication were very good (on average twice as good as the monoscopic and stereoscopic approaches). Combining stereo vision and shadow indication produced even better accuracy when adjusting the manipulation arrow. The different users' accuracy values were virtually identical. The results of the Manipulation Task for the first group are presented in Figure 23. The angular difference between the optimal manipulation direction and the adjusted direction determines accuracy; smaller values represent better accuracy. The last column, avg, shows the average values across all participants of the group.

Figure 23. Evaluation results of the Manipulation Task for the first group.


Experimental Results from the Second Group: The second group's results are comparable to the first group's. The stereoscopic approach mattered more to the second group (stereo vs. mono: 12 degrees) than to the first (stereo vs. mono: 2 degrees). Using shadows to adjust the manipulation angle (7 degrees deviation on average) was twice as effective as the stereoscopic approach and four times as effective as the monoscopic approach. Using stereo techniques in addition to shadow indication produced higher accuracy among all subjects in the second group; the average difference of one degree between the two approaches was marginal, but the standard deviation with stereo and shadow was smaller. The results of the Manipulation Task for the second group are presented in Figure 24.

Figure 24. Evaluation results of the Manipulation Task for the second group.


6.5.  Experimental Discussion

After the user experiments, the subjects were asked for their subjective impressions. Both groups (biologists and non-biologists) preferred the standard mouse for the Navigation Task and the Exploration Task. Head tracking was rated far better than the 3-D mouse; all subjects, including those with slow completion times, saw great potential in this approach. Head tracking was rated the technique best suited for spin navigation, while the mouse was rated best for the zoom and roll components. However, the measured times for a task for one subject (intra-rater) and the average times among the subjects (inter-rater) varied greatly. The mapping of the navigation metaphor demonstrated that a standard device such as the mouse can outperform the six-degree-of-freedom approaches evaluated.

However, this is not always possible because of the mouse's inherent limitation to two dimensions. The influence of stereoscopic viewing on the adjustment of the manipulation angle was more important for the second group (participants with little or no 3-D experience) than for the experienced participants: the first group's average deviation with the monoscopic approach was 15 degrees, the second group's 27 degrees. Almost all subjects in the first group reported that they automatically used the shading and silhouette information (cf. the importance of depth cues [ WFG92 ]), which might explain why the monoscopic and stereoscopic accuracy results did not differ much in this group. A few participants reported that they did not use the arrow's shadow as indirect 3-D information, which might explain these subjects' comparatively poor accuracy values. The application of stereo techniques together with shadow indication was demonstrated to meet biologists' requirements for this task; none of the subjects in the second group was able to fulfill the predefined accuracy requirement using a stereo representation alone. Finally, while the application of stereo techniques demonstrably improves the adjustment of the manipulation angle, shadow indication with stereoscopic support does so far more effectively and reliably.

7.  Conclusion and Future Directions

Now that suitable use scenarios have been identified, VR is being used in the corresponding process applications. Important steps have been taken toward the interactive analysis, visualization, and exploration of different data domains, especially for polygonal representations. Suitable VR techniques that provide biologists with added value exist for the different tasks in the model generation, visualization, exploration and analysis pipeline.

With its 360-degree laser projection system and support for collaborative interaction, the Elbe Dom impressively exemplifies the potential of immersive VR. Nevertheless, desktop VR (e.g., with passive stereo) and semi-immersive VR applications are equally suitable for single-user tasks.

The application scenarios presented here will have to be evaluated further and expanded in the future. Considerable research is still needed, especially in the field of efficient user input.

8.  Acknowledgements

This research is being supported by the BMBF grants PTJ-BIO/0313821B and PTJ-BIO/0313821A. The authors would like to thank Rainer Pielot (IPK Gatersleben, Plant Bioinformatics Group) for providing NMR data and Uta Siebert and Birgitt Zeike (IPK Gatersleben, Seed Development Group) for the fruitful discussions and sample handling and digitizing.


Figure 25. Initial evaluation of the accuracy and speed for the independent navigation components: (a) spin, (b) zoom, and (c) roll.



[ACK01] N. Amenta S. Choi, and R. K. Kolluri The power crust SMA '01: Proceedings of the sixth ACM symposium on Solid modeling and applications,  2001249—266New York, NY, USA ACM Ann Arbor, Michigan, United Statesisbn 1-58113-366-9.

[AMSW08] S. Adler R. Mecke M. Schenk, and C. Wex Hashbasierte Zerlegung von Tetraedernetzen curac.08 Tagungsband: 7. Jahrestagung der Deutschen Gesellschaft für Computer -- und Roboterassistierte Chirurgie e.V.,  2008 D. Bratz, S. Bohn and J. Hoffmann (Eds.) pp. 203—204 isbn 978-3-00-025798-8.

[Anh03] J. Anhøj Generic Design of Web-Based Clinical Databases Journal of Medical Internet Research,  5 (2003)no. 4e27issn1438-8871.

[BC00] D. Bourguignon and M.-P. Cani Controlling Anisotropy in Mass-Spring Systems Computer Animation and Simulation 2000,  2000pp. 113—123 Springer-Verlag isbn 3-211-83392-7.

[Ber04] M. Bernd Konzepte für den Einsatz von Virtueller und Erweiterter Realität zur interaktiven WissensvermittlungTechnische Universität Darmstadt, Fachbereich Informatik2004.

[BHP06] R. Bade J. Haase, and B. Preim Comparison of Fundamental Mesh Smoothing Algorithms for Medical Surface Models SimVis,  2006pp. 289—304 isbn 3-936150-46-X.

[BL95] F. Biocca and M. R. Levy Communication applications of virtual reality,  Communication in the age of virtual reality,  1995 Lawrence Erlbaum Associates, Inc. Mahwah, NJ, USApp. 127—157isbn 0-8058-1550-3.

[BMH00] S. Bougourd J. Marrison, and J. Haseloff An aniline blue staining procedure for confocal microscopy and 3D imaging of normal and perturbed cellular phenotypes in mature Arabidopsisembryos The Plant Journal,  24 (2000)no. 4543—550issn 0960-7412.

[BO03] A. Belyaev and Y. Ohtake A comparison of mesh smoothing methods In Proceedings of the Israel-Korea Bi-National Conference on Geometric Modeling and Computer Graphics,  2003 Y. O. A. & Computer,  pp. 83—87.

[Bra65] R. N. Bracewell The Fourier Transform and Its Applications1965 McGraw-Hill Publishing Company New Yorkisbn 9780070070127.

[BS08] F. Bollenbeck and U. Seiffert Fast registration-based automatic segmentation of serial section images for high-resolution 3-D plant seed modeling ISBI 2008. 5th IEEE International Symposium on Biomedicial Imaging: From Nano to Macro, 2008,  IEEE 2008pp. 352—355 isbn 978-1-4244-2002-5.

[BSC06] G. Bilodeau Y. Shu, and F. Cheriet Multistage graph-based segmentation of thoracoscopic images Computerized Medical Imaging and Graphics,  30 (2006)no. 8437—446issn 0895-6111.

[BSS06] C. Brüß M. Strickert, and U. Seiffert Towards Automatic Segmentation of Serial High-Resolution Images Proceedings Workshop Bildverarbeitung für die Medizin,  2006pp. 126—130.

[BWSS09] F. Bollenbeck D. Weier W. Schoor, and U. Seiffert From Individual Intensity Voxel Data to Inter-Individual Probabilistic Atlases of Biological Objects by an Interleaved Registration-Segmentation Approach,  Proceedings of the 4th International Conference on Computer Vision and Theory and Applications,  February  2009Vol. 1 A. K. Ranchordas and H. Araújo pp. 125—129isbn 978-989-8111-69-2.

[CKS97] V. Caselles R. Kimmel and G. Sapiro Geodesic Active Contours International Journal of Computer Vision,  22 (1997)no. 161—79 issn 0920-5691.

[CR06] O. Cakmakci and J. Rolland Head-worn displays: a review Journal of Display Technology,  2 (2006)no. 3199—216issn 1551-319X.

[DHL98] Oliver Deussen Pat Hanrahan Bernd Lintermann Radomír Mech Matt Pharr, and Przemyslaw Prusinkiewicz Realistic Modeling and Rendering of Plant Ecosystems Proceedings of the 25th annual conference on Computer graphics and interactive techniques,  1998pp. 275—286 isbn 0-89791-999-8.

[DVMVdH03] P. H. B. De Visser L. F. M Marcelis G. W. A. M. Van der Heijden G. C. Angenent J. B. Evers P. C. Struik, and J. Vos 3D digitization and modeling of flower mutants of Arabidopsis thaliana International Symposium on plant growth modeling, simulation, visualization and their applications,  2003 Tsinghua University Press pp. 218-226isbn 7-302-07140-3.

[Far88] G. Farin Curves and surfaces for computer aided geometric design: a practical guide Academic Press Professional, Inc. 1988San Diego, CA, USAisbn 0-12-249050-9.

[FUS98] A. X. Falcão J. K. Udupa S. Samarasekera S. Sharma B. E. Hirsch, and R. de Alencar Lotufo User-steered image segmentation paradigms: live wire and live lane Graphical Models Image Processing,  60 (1998)no. 4233—260 issn 1077-3169.

[GB92] L. Gottesfeld Brown A survey of image registration techniques ACM Comput. Surv.,  24 (1992)no. 4325—376issn 0360-0300

[GDB07] S. Gubatz V. J. Dercksen C. Brüß W. Weschke, and U. Wobus Analysis of barley (Hordeum vulgare) grain development using three-dimensional digital models The Plant Journal,  52 (2007)no. 4779—790issn 0960-7412.

[GH97] M. Garland and P. S. Heckbert Surface simplification using quadric error metrics Proceedings of SIGGRAPH,  Los Angeles 1997pp. 209—216isbn 0-89791-896-7.

[Gli06] S. M. Glidewell NMR imaging of developing barley grains Journal of Cereal Science,  43 (2006)no. 170—78issn 0733-5210.

[Hah05] H. K. Hahn Morphological VolumetryUniversität Bremen2005.

[HHF07] A. Hopp S. Havemann, and D. W. Fellner A Single Chip DLP Projector for Stereo-scopic Images of High Color Quality and Resolution In Virtual Environments 2007 - IPT-EGVE 2007 - Short Papers and Posters,  2007pp. 11—26.

[Hof05] H. Hoffmann: VR Concepts and Technologies. Information Society Technologies (IST), Intuition Virtual Reality, 2005.

[HS85] R. M. Haralick and L. G. Shapiro Image Segmentation Techniques Computer Vision, Graphics, and Image Processing,  29 (1985)no. 1100—132 issn 0734-189X.

[Hul90] J. Hultquist: A virtual trackball. In: Graphics Gems, Academic Press Professional, Inc., San Diego, CA, USA, 1990, pp. 462—463, ISBN 0-12-286169-5.

[ISNC03] L. Ibanez W. Schroeder L. Ng J. Cates The ITK Software GuideKitware, Inc.2003isbn 1-930934-10-6.

[JLM08] K. Jaqaman D. Loerke M. Mettlen H. Kuwata S. Grinstein S. L. Schmid, and G. Danuser Robust single-particle tracking in live-cell time-lapse sequences Nature Methods,  5 (2008)no. 8695—702issn 1548-7091.

[Kan05] H. W. Kang G-wire: A livewire segmentation algorithm based on a generalized graph formulation Pattern Recognition Letters,  26 (2005)no. 132042—2051issn 0167-8655.

[Kep75] E. Keppel Approximating Complex Surfaces by Triangulation of Contour Lines IBM Journal of Research and Development,  19 (1975)no. 12—11issn 0018-8646.

[KHKB07] A. Kuß H.-C. Hege S. Krofczik, and J. Borner Pipeline for the Creation of Surface-based Averaged Brain Atlases Proc. of WSCG 2007 (full papers), 15th Int. Conf. in Central Europe on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic,  2007Vol. 1pp. 17—24isbn 978-80-86943-98-5.

[Lan07] E. Lantz A survey of large-scale immersive displays EDT '07: Proceedings of the 2007 workshop on Emerging displays technologies,  2007 C. Cruz-Neira and D. Reiners (Eds.) pp. 1—7 New York, NY, USA ACM San Diego, Californiaisbn 978-1-59593-669-1.

[LC87] W. E. Lorensen and H. E. Cline Marching cubes: A high resolution 3D surface construction algorithm SIGGRAPH '87: Proceedings of the 14th annual conference on Computer graphics and interactive techniques,  1987New York, NY, USA ACM Press pp. 163—169isbn 0-89791-227-6.

[LD98] Bernd Lintermann and Oliver Deussen Modelling Method and User Interface for Creating Plants Comput. Graph. Forum,  17 (1998)no. 173—82issn 0167-7055.

[LLVT03] T. Lewiner H. Lopes A. W. Vieira and G. Tavares Efficient Implementation of Marching Cubes' Cases with Topological Guarantees Journal of graphics tools,  8 (2003)no. 21—15issn 1086-7651.

[LM01] L. Lucchese and S. K. Mitra Color image segmentation: a state-of-the-art survey Proceedings of the Indian National Science Academy,  67 (2001)no. 2207—222issn 0370-0046.

[LSH07] B. Lloyd G. Székely, and M. Harders Identification of Spring Parameters for Deformable Object Simulation IEEE Transactions on Visualization and Computer Graphics,  13 (2007)no. 51081—1094issn 1077-2626.

[MB98] E. N. Mortensen and W. A. Barrett Interactive Segmentation with Intelligent Scissors Graphical Models and Image Processing,  60 (1998)no. 5349—384issn 1077-3169.

[MBMB06] J. Marker I. Braude K. Museth, and D. Breen Contour-based surface reconstruction using implicit curve fitting, and distance field filtering and interpolation Proceedings of the International Workshop on Volume Graphics,  2006pp. 95—102.

[MBST06] R. Mecke D. Berndt W. Schoor, and E. Trostmann Generation of texturized 3-D models with high resolution using optical 3-D metrology Proceedings of the 7th Conference on Optical 3D Measurement Techniques: Applications in GIS, mapping, manufacturing, quality control, robotics, navigation, mobile mapping, medical imaging, animation, VR generation,  2006Vol. IIpp. 3—12isbn 3-9501492-2-8.

[MJM06] T. Munzner C. Johnson R. Moorhead H. Pfister P. Rheingans, and T. S. Yoo NIH-NSF Visualization Research Challenges Report Summary IEEE Computer Graphics and Applications,  26 (2006)no. 220—24issn 0272-1716.

[Mod04] J. Modersitzki Numerical Methods for Image Registration Oxford University Press 2004isbn 0198528418.

[MV98] J. B. A. Maintz and M. A. Viergever A survey of medical image registration Medical Image Analysis,  2 (1998)11—36issn 1361-8415.

[PH04] W. Pereanu and V. Hartenstein Digital three-dimensional models of Drosophila development Current Opinion in Genetics & Development,  14 (2004)no. 4382—391issn 0959-437X.

[PSM08] R. Pielot U. Seiffert B. Manz D. Weier F. Volke and W. Weschke 4D Warping for Analysing Morphological Changes in Seed Development of Barley Grains VISAPP (1),  2008 A. K. Ranchordas and H. Araújo (Eds.) INSTICC - Institute for Systems and Technologies of Information, Control and Communication pp. 335—340isbn 978-989-8111-21-0.

[RBB94] M. Ringwald R. A. Baldock J. Bard M. H. Kaufman J. T. Eppig J. E. Richardson J. H. Nadeau, and D. Davidson A Database for Mouse Development Science,  265 (1994)no. 51812033—2034issn 0036-8075.

[SBH08] W. Schoor F. Bollenbeck M. Hofmann R. Mecke U. Seiffert, and B. Preim Automatic Zoom and Pseudo Haptics to Support Semiautomatic Segmentation Tasks 16th WSCG 2008,  WSCG'2008 Full Papers ProceedingsPlzen, Czech Republic2008 V. Skala (Ed.) pp. 81—88isbn 978-80-86943-15-2.

[SHC04] M. S. Su W. L. Hwang, and K. Y. Cheng Analysis on multiresolution mosaic images IEEE Transactions on Image Processing,  13 (2004)no. 7952—959issn 1057-7149.

[SMH07] W. Schoor, S. Masik, M. Hofmann, R. Mecke, and G. Müller: eLBEDoM: 360 Degree Full Immersive Laser Projection System. Virtual Environments 2007 - IPT-EGVE 2007 - Short Papers and Posters, 2007, pp. 15—20, ISBN 978-3-905673-64-7.

[SMM08] W. Schoor, S. Masik, R. Mecke, U. Seiffert, and M. Schenk: VR Based Visualization and Exploration of Barley Grain Models with the Immersive Laser Projection System - Elbe Dom. 10th Virtual Reality International Conference, Laval, France, 2008, S. Richir and E. Klinger (Eds.), pp. 217—224, ISBN 2951573073.

[SSB08] R. Schmidt K. Singh, and R. Balakrishnan Sketching and Composing Widgets for 3D Manipulation Computer Graphics Forum,  Proceedings of Eurographics 200827 (2008)no. 2301—310issn 0167-7055.

[SZL92] W. J. Schroeder J. A. Zarge, and W. E. Lorensen Decimation of triangle meshes SIGGRAPH '92: Proceedings of the 19th annual conference on Computer graphics and interactive techniques,  1992New York, NY, USA ACM pp. 65—70isbn 0-89791-479-1.

[TWS08] J. Thiel, D. Weier, N. Srenivasulu, M. Strickert, N. Weichert, M. Melzer, T. Czauderna, U. Wobus, H. Weber, and W. Weschke: Different Hormonal Regulation of Cellular Differentiation and Function in Nucellar Projection and Endosperm Transfer Cells: A Microdissection-Based Transcriptome Study of Young Barley Grains. Plant Physiology, 148 (2008), no. 3, pp. 1436—1452, ISSN 0032-0889.

[TYJW03] A. Tsai A. Yezzi Jr W. Wells C. Tempany D. Tucker A. Fan E. Grimson, and A. Willsky A shape-based approach to the segmentation of medical imagery using level sets IEEE Transactions on Medical Imaging,  22 (2003)no. 2137—154issn 0278-0062.

[WFG92] L. C. Wanger, J. A. Ferwerda, and D. P. Greenberg: Perceiving Spatial Relationships in Computer-Generated Images. IEEE Computer Graphics and Applications, 12 (1992), no. 3, pp. 44—51, 54—58, ISSN 0272-1716.

[Wil78] L. Williams: Casting curved shadows on curved surfaces. SIGGRAPH Computer Graphics, 12 (1978), no. 3, pp. 270—274, ISSN 0097-8930.

[Wit97] A. Witkin An introduction to physically based modeling: Particle system dynamics ACM Siggraph Course Notes,  1997.

[XZZ06] F. Xiong X. Zhao,, and Y. Zhang 3-D Animation and Virtual Reality,  Chapter 6 Management and Decision Support Systems,  CIGR Handbook of Agricultural Engineering,  2006Volume VI Information Technologypp. 425—434.

[YOB02] H. Yagou Y. Ohtake A. Belyaev Mesh smoothing via mean and median filtering applied to face normals GMP '02: Proceedings of the Geometric Modeling and Processing — Theory and Applications,  Washington, DC, USA IEEE Computer Society 2002pp. 124—131isbn 0-7695-1674-2.



Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.
