
GI VR/AR 2012

Hands-Free Navigation in Immersive Environments for the Evaluation of the Effectiveness of Indoor Navigation Systems

  1. Volker Settgast Fraunhofer Austria Research GmbH, Visual Computing
  2. Marcel Lancelle Fraunhofer IDM@NTU
  3. Dietmar Bauer Austrian Institute of Technology, Department Mobility, Dynamic Transportation Systems
  4. Markus Piff Austrian Institute of Technology, Department Mobility, Dynamic Transportation Systems
  5. Dieter Fellner Fraunhofer Austria Research GmbH, Visual Computing

Abstract

While navigation systems for cars are in widespread use, indoor navigation systems based on smartphone apps have only recently become technically feasible. Hence, tools to plan and evaluate particular designs of information provision are needed. Since tests in real infrastructures are costly and environmental conditions cannot be held constant, one must resort to virtual infrastructures. This paper presents the development of an environment to support the design of indoor navigation systems, whose centerpiece is a hands-free navigation method using the Microsoft Kinect in the four-sided Definitely Affordable Virtual Environment (DAVE). Navigation controls using the user's gestures and postures as input are designed and implemented. The installation of expensive and bulky hardware such as treadmills is avoided while still giving the user a good impression of the distance she has traveled in virtual space. An advantage over approaches using a head-mounted display is that the DAVE allows users to interact with their smartphone. Thus the effects of different indoor navigation systems can be evaluated with the resulting system already in the planning phase.

Published: 2014-09-04


1.  Introduction

Transportation hubs such as airports, train stations and other junctions of mass public transport are becoming increasingly complex, which can pose obstacles to the orientation of passengers. In particular, older people as well as people with mobility restrictions rely on timely and effective provision of information in order to find their way easily and avoid unnecessary delays. They also have special requirements that typically are not in the main focus of static route guidance measures such as signage.

Increasingly, this information provision is accomplished using indoor navigation systems realized on smartphones that complement the static guidance systems. One testimony to this is the inclusion of indoor navigation in Google Maps 6.0 [Vol14]. While the use of smartphones for indoor navigation is technically feasible today, there are at present no tools to test, prior to implementation, whether a suggested system is appropriate. Comparing different systems with respect to their effectiveness is likewise not possible at present. This is of particular interest as the main target groups, such as older persons, will not use a system unless its usage is intuitive and simple.

For financial reasons, virtual infrastructures are necessary for such tests. In these virtual environments, test persons can be exposed to navigation tasks and their success in fulfilling these tasks using the navigation aid can be measured. A number of different virtual representations have been used for this purpose, ranging from desktop visualization to immersive virtual environments. Experiments have shown that a high degree of immersion is necessary to be able to draw valid conclusions (see [BSS13] for a recent review).

To realize such immersive environments there are two options. The first is telepresence conveyed via a head-mounted display, as used by Kretz et al. [KHR11]. Here the participant walks normally and her motion is fed back to the virtual environment. In such a setting smartphone usage cannot be incorporated realistically.

The second option is a CAVE environment, which attains immersion via shutter glasses inside a room onto whose walls the virtual infrastructure is projected. Interaction with smartphones works as in reality in such a setting, as the shutter glasses are no obstacle to natural vision.

Still, in this setting the question arises how the participants navigate in the model, as the represented model is typically larger than the CAVE and natural walking is limited to short distances by the extent of the CAVE. Often this is accomplished using a joystick or a similar pointing device. However, such a hand-held device is not desirable when participants need to interact with the smartphone app or when, in a travel setting, they potentially carry a suitcase, a ticket or other items.

Additionally it is known that the degree of immersion increases with the level of physical effort necessary to navigate [ SUS95 ]. Also the perception of traveled distances is distorted if it is restricted to the visual sense.

Therefore, this paper describes a navigation method based on simple and intuitive walking-related gestures, captured with the Microsoft Kinect. To perform the gestures, the participants walk on the spot. During this activity they can naturally interact with both the static and the mobile guidance systems in a realistic fashion. The paper extends the work presented in [BSS13] by providing a more detailed description of the navigation method and the system calibration process.

As a further contribution of the paper, methods for automatic data collection and preprocessing are described in detail. Within the environment a number of different sensing options have been implemented in order to measure the reactions of the test persons, including interaction with the smartphone, the path actually taken and video data, potentially including eye tracking measurements. This allows the effectiveness of indoor navigation systems to be evaluated in the planning phase.

2.  Related Work

Many traveling techniques and navigation methods have been developed for immersive virtual environments. Obvious options like pointing-based methods, often in combination with a joystick or a wand, were already described over two decades ago. Some of them are summarized by Ware and Osborne [WO90]. More recent overviews, such as those by Bowman [BKH97], [BKLP05] and Mine [Min95], categorize existing travel methods.

Mechanical locomotion interfaces can be used, such as 1D or 2D treadmills or large hollow rotating spheres the user walks in. Among others, Iwata et al. developed several innovative locomotion interfaces such as treadmills or moving tiles [IYFN05]. CyberSphere [FRE03] and Cyberwalk [STU07] are special platforms that allow the user to walk within a virtual environment. While the former system uses a rotating sphere, the latter employs balls actuated by a belt on a turntable. However, these locomotion systems require considerable mechanical effort and in practice are often more difficult to use than one would expect. In addition, they still cannot reduce simulator sickness problems because the real physical accelerations and the visually perceived virtual accelerations do not match.

Bourdot et al. use a stateless approach in which different zones of the user's position trigger different navigation behavior [BDA99]. Leeb et al. describe simple VR navigation with a brain-computer interface, measuring neural impulses with an electroencephalogram [LSFP07]. However, the setup time is long, navigation is very limited and lengthy, and physical motion is strongly restricted.

LaViola et al. describe multiple hands-free techniques for multiscale ground-based walk navigation and also address the problem of the missing back wall in a CAVE [LFKZ01]. By amplifying the mapping of the user's orientation, 360 degree views become possible. They also introduce a pair of special slippers for the navigation task. Their setup uses a magnetic tracking system and the user is required to wear a belt for tracking of the waist in addition to head tracking.

Adamo-Villani et al. developed and evaluated a travel interface based on stepping on a dance mat [AVJ07]. Beckhaus et al. also used a dance mat and developed a chair interface for traveling [BBH05]. This hands-free method of navigation is not very natural, and the user is heavily distracted by choosing the correct floor button for traveling actions. These techniques do not convey the experience of physical movement that would allow the traveled distance to be estimated precisely.

Recent articles describe experiments with the Microsoft Kinect as a natural user interface for CAVEs. For example, Jung et al. use the Kinect in combination with a Nintendo Wii controller for traveling in virtual worlds [JKS11]. Other techniques use hand and arm gestures for navigation and traveling tasks.

None of these approaches realizes a true hands-free navigation that is intuitive, allows interaction with a smartphone and the carrying of suitcases and the like, and at the same time allows the traveled distance to be estimated realistically. Therefore a new approach has been developed, which is presented in detail in the following sections [1].

3.  The Setup

3.1.  The DAVE

The Definitely Affordable Virtual Environment (DAVE) is an immersive projection room with three side walls and one floor projection [FHH03, LSF08]. The projection screens are 3.3 meters wide and 2.7 meters high (see Figure 1). Stereoscopic shutter glasses are used, similar to the ones known from 3D TV sets or 3D cinemas. In addition, an optical head tracking system allows a correct parallax and an undistorted view for the main user. The user can simply walk around an object to see it from all sides. A big advantage compared to most head-mounted displays is the very wide field of view. Such a CAVE provides the most visually convincing immersive experience.

Figure 1. Scheme of the architecture of the Definitely Affordable Virtual Environment (DAVE). Large mirrors are used to minimize the required space for the setup. Schematically: tracking and cameras (turquoise), projector synchronization (magenta), video signal (blue) and network (green).


However, natural walking is very limited due to the small room size and no haptic feedback is available. In order to explore a larger 3D world, navigation or so-called travel techniques as mentioned in the last section are necessary. Since mostly standard hardware components are used, the system as well as its upgrades over time are cost-effective. Large mirrors are used to fold the light paths from the projectors to the screens in order to minimize the necessary room size. In order to track objects or the user's head, multiple reflective markers are rigidly attached to the objects or the glasses.

These passive markers are detected by multiple cameras and their positions are computed by triangulation. Since identical markers are used, identification is only possible with heuristic estimations or with a fixed constellation of markers, called a 'target' below. At least three markers are necessary to compute all six degrees of freedom of an object.

In our current setup the cameras are attached above the screen, and the limited field of view restricts the possible tracking volume. It is mainly used and optimized for head and hand tracking, and the performance is limited for objects close to the floor, such as for foot tracking. In addition, the rather low-power infrared lighting setup prevents marker detection during fast motions. With one target attached to the glasses, the system determines the position and orientation of the main user's head. From this information it is possible to estimate the position of the user's eyes. A dynamic asymmetric view frustum is used to provide undistorted stereoscopic imagery to the main user.
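The dynamic asymmetric (off-axis) view frustum mentioned above can be sketched as follows. The snippet is a minimal illustration, assuming a single wall lying in the z = 0 plane with its origin at the bottom center and the tracked eye position given in the same coordinates; this convention and all names are assumptions, not taken from the DAVE implementation.

```python
# Sketch of an off-axis (asymmetric) frustum for one projection wall.
# Assumes a 3.3 m x 2.7 m wall in the z = 0 plane, origin at its bottom
# center, with the tracked eye position expressed in the same coordinates.

def asymmetric_frustum(eye, wall_width=3.3, wall_height=2.7, near=0.1):
    ex, ey, ez = eye          # eye position; ez = distance to the wall plane
    left_edge, right_edge = -wall_width / 2.0, wall_width / 2.0
    bottom_edge, top_edge = 0.0, wall_height
    # Project the wall edges onto the near plane as seen from the eye.
    scale = near / ez
    return {
        "left":   (left_edge - ex) * scale,
        "right":  (right_edge - ex) * scale,
        "bottom": (bottom_edge - ey) * scale,
        "top":    (top_edge - ey) * scale,
        "near":   near,
    }

# Example: head tracked 1.5 m in front of the wall, at 1.7 m height, slightly
# left of center. The result can feed a glFrustum-style projection per wall.
print(asymmetric_frustum(eye=(-0.2, 1.7, 1.5)))
```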

3.2.  Installation of the Kinect Sensor

The spatial restrictions of the DAVE system leave three options for the placement of the Kinect sensor:

  • At the ceiling, similar to the floor projection using a mirror

  • At the back opening of the DAVE

  • Above the front wall

Figure 2. Sample screen shot of the test application 'SkeletalViewer' when the Kinect is placed at the ceiling of the DAVE.


To evaluate the suitability of these positions, the resulting Kinect data are analyzed with the test application 'SkeletalViewer', which is part of Microsoft's software development kit and visualizes recognized skeletons. The application also shows the camera image and the depth map. The position at the ceiling, next to the floor projector, implies a viewing angle that is much too steep for recognizing the user standing in the DAVE (see Figure 2). The test application is not able to stably reconstruct a skeleton for that angle.

Placing the Kinect at the back opening of the DAVE gives much better results for the recognition of skeletons. Users are visible from the back while the software assumes that it sees them from the front; this is not an issue for the navigation control. However, the setup has two disadvantages. First, observers outside the DAVE might occlude the participant. For user studies this would be a problem due to the spatial restrictions of the current DAVE location. Second, the recognition of arm gestures is very limited because the body of the test user occludes the arms in many situations.

In the third configuration the Kinect is placed on top of the front wall (see Figure 3). The viewing angle to the user is again rather steep, but still acceptable for the recognition of the skeleton. The participant is completely visible only if she stands about 1.5 m away from the front wall. If she moves closer, the feet and legs are outside of the viewing frustum of the Kinect sensor. The person is never occluded by observers and also her arms are visible as long as she is facing the front wall. Therefore this position for the Kinect is judged as the most suitable and used subsequently.

Figure 3. Sample screen shot of the test application 'SkeletalViewer' when the Kinect is placed above the front wall.



Both the Kinect sensor and the optical tracking system use infrared light for sensing. However, with the chosen position, neither system's sensors directly see the other system's light sources, so the two do not interfere and can be used in parallel.

4.  Implementation

4.1.  Kinect Application

For accessing and controlling the Kinect, the Microsoft software development kit (SDK) is used. To acquire the skeleton data from the Kinect and use it in the DAVE applications, the test application 'SkeletalViewer' is adapted. The application shows the camera output, the depth image and any detected skeletons. The recognized skeleton data consists of twenty 3D points, one for each of twenty joints of the human body. Arms and legs are divided into three segments each, the head is one segment and the hip consists of two segments. With our modification, the joint data are made available over a standard TCP/IP socket to the DAVE controller application. This is done analogously to the optical tracking system.
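As a rough illustration of this data path, the following sketch forwards one skeleton frame over a TCP/IP socket. The JSON message format, host name and port are hypothetical; the actual adapted 'SkeletalViewer' uses its own protocol.

```python
# Minimal sketch of streaming Kinect joint positions to the DAVE controller
# over TCP/IP, analogous to the optical tracking stream. Host, port and
# message layout are illustrative assumptions.

import json
import socket

DAVE_CONTROLLER = ("dave-controller", 4711)   # hypothetical host and port

def send_skeleton(joints):
    """joints: dict mapping joint names to (x, y, z) positions in meters."""
    message = json.dumps({"type": "kinect_skeleton", "joints": joints}) + "\n"
    with socket.create_connection(DAVE_CONTROLLER) as sock:
        sock.sendall(message.encode("utf-8"))

# Example usage (two of the twenty joints reported by the Kinect SDK):
# send_skeleton({"head": (0.05, 1.68, 1.9), "knee_left": (-0.15, 0.48, 1.7)})
```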

4.2.  Calibration of the Kinect

The test application works within the coordinate system of the Kinect; 3D data is sent without any modification. To match the coordinate system of the DAVE, a transformation matrix composed of two small rotations (< 45°) and a translation is defined. Scaling is not necessary as both coordinate systems use meters as their unit. To calibrate the Kinect, the already calibrated optical tracking system of the DAVE is used. Using the knowledge of the head position in both systems, it is easy to calculate the required transformation. The user has to stand in the middle of the DAVE with her arms spread to form a T-shape. The vector from the midpoint of the feet to the head then represents the up vector of the Kinect coordinate system, and the elbow joints can be used to calculate a vector pointing to the right. Those two vectors are used to calculate the pitch and yaw rotations. The latter is necessary because the Kinect is not placed at the exact center of the front wall. Finally, a translation offset is calculated from the rotated head joint of the Kinect data and the head position of the optical tracking. While other methods can lead to a much more accurate registration, this method is already sufficient for our purposes.
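A minimal sketch of this calibration step is given below, assuming numpy and T-pose joint positions as 3D vectors in meters. Instead of decomposing the rotation into explicit pitch and yaw angles as described above, the sketch builds the full rotation directly from the measured up and right vectors; all names are illustrative.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def calibrate(head_kinect, feet_mid_kinect, elbow_left_kinect,
              elbow_right_kinect, head_dave):
    """All inputs are 3D numpy arrays in meters."""
    # Up axis of the DAVE as observed in Kinect coordinates: feet midpoint -> head.
    up = normalize(head_kinect - feet_mid_kinect)
    # Right axis from the T-pose: left elbow -> right elbow, orthogonalized against up.
    right = normalize(elbow_right_kinect - elbow_left_kinect)
    right = normalize(right - np.dot(right, up) * up)
    # Forward axis completes a right-handed basis.
    forward = np.cross(right, up)
    # Rotation mapping Kinect coordinates to DAVE coordinates
    # (rows are the DAVE axes expressed in Kinect coordinates).
    R = np.vstack([right, up, forward])
    # Translation offset: the rotated Kinect head must coincide with the tracked head.
    t = head_dave - R @ head_kinect
    return R, t

def kinect_to_dave(point_kinect, R, t):
    return R @ np.asarray(point_kinect) + t
```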

4.3.  Test Data for Development of Movement Gestures

To develop a gesture for forward movement, test data of people walking in place in the DAVE is collected. The participant is told to walk slowly for 30 seconds and then faster for another 30 seconds. The joint positions are recorded and analyzed for obvious repeating patterns. The positions of the feet turn out to be too noisy to be usable. The knee positions show a more favorable pattern. Figure 4 shows the up/down motion (blue) of the knees and the forward/backward motion (green). It also shows the same data for the ankles (red and turquoise). Even a simple sign function of the knee position relative to the respective mean shows a distinct pattern (yellow and pink). The movement gesture is thus based on this observation.
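The step-detection idea derived from these recordings can be sketched as follows, assuming 30 Hz joint samples and an illustrative window length:

```python
from collections import deque

class StepDetector:
    """Triggers a step impulse on each sign change of the knee position
    relative to its running mean (cf. the yellow/pink curves in Figure 4)."""

    def __init__(self, window=30):          # roughly one second at 30 Hz
        self.history = deque(maxlen=window)
        self.last_sign = 0

    def update(self, knee_forward_pos):
        """knee_forward_pos: knee coordinate along the walking direction (m).
        Returns True when a new step impulse should be triggered."""
        self.history.append(knee_forward_pos)
        mean = sum(self.history) / len(self.history)
        sign = 1 if knee_forward_pos >= mean else -1
        stepped = self.last_sign != 0 and sign != self.last_sign
        self.last_sign = sign
        return stepped
```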

Figure 4. Selected test data of a person walking in place, recorded by the Kinect at 30 Hertz.



4.4.  Navigation Control Module

The navigation control module consists of two functions:

  1. Rotation, centered at the participant

  2. Moving forward

The rotation is realized using a combination of the shoulder joints and the hip joints, with the shoulder orientation weighted twice as heavily as the hip orientation. If the user rotates her shoulders to the left, the navigator starts rotating the world to the right. By also rotating the lower body to the left, the speed of rotation to the right is increased. This technique allows the user to freely look around without influencing navigation. Visual feedback is given in the form of an arrow on the floor pointing towards the recognized direction. As the Kinect can only recognize the skeleton of a person if the legs and arms are not occluded, the maximal allowed rotation angle is less than 45 degrees. If the person is facing one of the side walls, self-occlusion prevents the Kinect from accurately determining the correct rotation angle.

To move forward in the virtual world, the user has to physically move her legs up and down. The sign change of the local distance of the knees in the direction of traveling is used to trigger impulses of movement (pink plot in Figure 4). The height of the feet influences the strength of the impulse, simulating the fact that a tall person makes larger steps. To prevent jerky movement, the forward motion does not stop abruptly but follows a simple kind of inertia as long as the legs are moving. Otherwise, a damping actively slows down the forward motion.
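A simplified sketch of one possible navigation update combining both controls is given below; all gains and constants are illustrative assumptions rather than the values used in the DAVE.

```python
class Navigator:
    ROTATION_GAIN = 0.8   # world rotation per radian of body twist, illustrative
    DAMPING = 2.0         # 1/s, slows the motion down once the legs stop

    def __init__(self):
        self.forward_speed = 0.0   # m/s in the virtual world

    def update(self, shoulder_yaw, hip_yaw, step_triggered, foot_height, dt):
        """Angles in radians, foot_height in meters, dt in seconds.
        Returns (world rotation for this frame, forward distance for this frame)."""
        # Weighted body orientation: the shoulders weigh twice the hips.
        body_yaw = (2.0 * shoulder_yaw + hip_yaw) / 3.0
        # Turning the shoulders to the left rotates the world to the right.
        world_rotation = -self.ROTATION_GAIN * body_yaw * dt

        if step_triggered:
            # A higher foot lift gives a stronger impulse (larger virtual step).
            self.forward_speed += 0.5 + 2.0 * foot_height
        else:
            # Simple damping: the motion does not stop abruptly but decays.
            self.forward_speed *= max(0.0, 1.0 - self.DAMPING * dt)

        return world_rotation, self.forward_speed * dt
```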

4.5.  Inclusion of Indoor Navigation on Smartphones

In the DAVE environment the display of a smartphone is not occluded by the shutter glasses. Thus interaction with the smartphone is as natural as in reality. Via the tracking system, the position of the user in the virtual infrastructure as well as the direction of the head are known. This information can be used to adjust the location (in 3D, also accounting for the floor level in a multi-story infrastructure) and viewing angle in the navigation app.

Additionally, random noise or imprecision can be added at will to the position and viewing angle supplied to the app, mirroring the imprecision of real smartphone localization. As not only the current position but also the location history is known, a great number of different localization techniques can be implemented, such as proximity sensing, where the location is provided by information about being close to beacons via near field communication. In this way smartphones can also interact with the virtual representations of dynamic screens and info boards.
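As an illustration, the following sketch degrades the exact tracking data before handing it to the app: Gaussian noise on position and heading, and a simple proximity-sensing mode that reports the nearest virtual beacon. The noise levels, the beacon representation and all names are assumptions.

```python
import math
import random

def noisy_localization(true_pos, true_heading_deg, pos_sigma=2.0, heading_sigma=10.0):
    """Adds Gaussian noise to the exact position (x, y) and heading (degrees)."""
    x, y = true_pos
    noisy_pos = (x + random.gauss(0.0, pos_sigma), y + random.gauss(0.0, pos_sigma))
    return noisy_pos, true_heading_deg + random.gauss(0.0, heading_sigma)

def proximity_localization(true_pos, beacons, radius=3.0):
    """Returns the position of the nearest virtual beacon within 'radius',
    or None if the user is not close to any beacon."""
    x, y = true_pos
    nearest = min(beacons, key=lambda b: math.hypot(b[0] - x, b[1] - y))
    if math.hypot(nearest[0] - x, nearest[1] - y) <= radius:
        return nearest
    return None
```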

While the user interacts with the smartphone, detailed data on the interaction can be collected easily. This makes it possible to monitor which functions of the app are accessed at all, how the user uses the app and where she requests which information. This makes a detailed analysis of the usability of the app possible.

4.6.  Data Collection and Organization

Figure 5. DataCockpit representing measurement data of one participant.


The measurement of smartphone interactions provides essential information for the evaluation of indoor navigation systems albeit requiring that appropriate context information is at hand.

In the established setup the required context information can be acquired from multiple data sources, enabling inter alia the following work flow: The participant is asked to complete a survey before entering the DAVE to obtain e.g. socio-demographic information as well as her previous experience with navigation systems. Then, while the person is within the DAVE, she is asked to perform a set of given tasks in the virtual infrastructure (e.g. find the next newspaper kiosk, buy something to eat), while at the same time a number of measurements are carried out. The participant's trajectory and viewing angle provide essential context information which is easily acquired by the navigation system. In addition, the person's thoughts are recorded using a dictaphone by asking her to articulate her thoughts aloud while navigating within the environment (thinking-aloud technique). Furthermore, a human observer observing the DAVE from the outside (via the missing back wall) documents anything that might be of interest by creating annotations using a shadowing tool [MRSF12]. Finally, after having completed her tasks, the participant exits the DAVE and fills out another survey, thereby documenting her newly acquired experiences.

Once the measured data is collected and centrally stored, an integrated consolidation of all measured data is essential for the subsequent analysis, implicitly demanding that synchronization among all data-acquiring entities is guaranteed. Technically this requirement is fulfilled by utilizing the Network Time Protocol, complemented by the manual clock adjustment required for the dictaphone capturing the thinking-aloud recordings. Once synchronization is established and the experiment is conducted in a pipelined fashion, the acquired data can be automatically assigned to the related participant and stored in an organized way.

A detailed analysis of the measured data of a large number of participants is cumbersome and requires a significant amount of work and effort. However, such a detailed analysis might not always be required. For instance, that an indoor navigation system did not work as expected at specific locations can be verified without a detailed analysis, simply by having an integrated view on the data. For this reason, and to support the analysis in general, the application DataCockpit has been developed.

DataCockpit is best explained in comparison to a multimedia player application, which supports the presentation of synchronized audio, video and possibly subtitle content. Whereas a multimedia player is restricted to these contents, DataCockpit allows the representation of a much wider range of content, including annotations, trajectories and smartphone interactions. Furthermore, the extraction of valuable information that can only be attained from an integrated view is laborious and can be greatly eased by using the application. For instance, the time spent walking from point A to point B can be extracted in DataCockpit quickly and comprehensibly by making use of its functionality and integrated data view.

Figure 5 depicts a screen shot of the application representing measurement data of one participant at a selected point in time. The red line represents the participant's trajectory and the yellow cone shows the person's current line of sight. Along the trajectory, flags are shown indicating the participant's position when the annotations or smartphone interactions were captured. Furthermore beneath the infrastructure plan the chronology of those annotations and smartphone interactions is shown. Finally the list on the right provides an overview of the annotations and smartphone interactions and the related functionality to insert, delete, alter and to organize those by defining categories (e.g. external annotations).

The application provides the functionality to represent (to play) the acquired data or seek within it, either by time through double clicking on the presented time line (green bar at the bottom) or by space through double clicking within the infrastructure plan seeking to the nearest position of the participant's trajectory.
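Seeking by space essentially reduces to a nearest-neighbor search over the trajectory samples; a minimal sketch, assuming samples of the form (timestamp, x, y):

```python
import math

def seek_by_space(click_xy, trajectory):
    """trajectory: list of (timestamp_s, x, y) samples.
    Returns the timestamp of the sample closest to the clicked position."""
    cx, cy = click_xy
    t, _, _ = min(trajectory, key=lambda s: math.hypot(s[1] - cx, s[2] - cy))
    return t
```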

DataCockpit is written in Java and makes use of the Xuggler [Con11] library, thereby providing support for a large range of audio and video codecs. Its architecture has been designed so that additional data sources (e.g. eye tracking information) can be integrated easily into the same view.

Figure 6. Simple test scenario for the pilot study. The distance of the round column is adjustable between 8 and 15 meters.


5.  Pilot Study

The development of the navigation method was followed by a series of tests in order to evaluate the usability and validity of the method. To this end, three tests have been performed using two different groups of users:

  1. Usability test.

  2. Comparison to navigation using a joystick.

  3. Validation of distance, timing and directional estimation in the DAVE environment.

The first two tests were conducted in the test setting depicted in Figure 6. The first test was directed towards exploring the intuitiveness of the navigation method. The main question answered by this test was whether a short explanation of the navigation method was sufficient for the test persons to complete complex navigation tasks. For this purpose, test participants were asked to follow a particular walking path in the virtual world shown in Figure 6, with a 13 m distance between the arches and the column. The path involved circling the column three times and walking under both archways twice. A recorded walking path is depicted in Figure 7. The minimum walking path in this scenario amounts to approximately 100 m.

Figure 7. Recordings of a user walking the simple test scenario. The distance of the round column is set to 13m.


All 39 persons involved in this experiment completed the test successfully. The average path length amounted to 182 m (with a standard deviation of 37 m), ranging from 135 m up to 324 m, indicating that the test persons took some detours. On average the participants needed 263 seconds to complete the task, with an average speed of 0.75 m/s. This is about half the value typically given as the mean walking speed (1.34 m/s). If the speed is calculated only on the second half of the sample for each participant, the average speed amounts to 0.96 m/s, which is closer to the expected speed, considering also that the task involves turning movements. A 2D histogram of the corresponding trajectories with a spatial resolution of 25 cm is provided in Figure 8. The lower speed when rounding the column (upper part of the figure) and when passing the two archways (lower part of the plot) is clearly visible. One can also observe some outliers representing longer detours of persons due to initial difficulties in mastering the maneuvering. This shows that while there is some learning involved, the navigation method is intuitive enough that complex tasks can be executed without long training.
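For reference, the path length and average speed reported above can be derived from the recorded trajectory samples roughly as follows; the sampling format is an assumption:

```python
import math

def path_length_and_speed(trajectory):
    """trajectory: chronologically ordered list of (timestamp_s, x, y) samples."""
    length = sum(math.hypot(x2 - x1, y2 - y1)
                 for (_, x1, y1), (_, x2, y2) in zip(trajectory, trajectory[1:]))
    duration = trajectory[-1][0] - trajectory[0][0]
    return length, (length / duration if duration > 0 else 0.0)
```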

Figure 8. 2D-histogram of user trajectories in the simple test scenario.


The second step in the validation of the navigation method was to compare it to navigation using a joystick. For this test the distance of the column in the scenario of Figure 6 was adjusted between 8 and 15 meters. In this way distance perception can be tested with different lengths. The main hypothesis tested here is that distance perception is more accurate if navigation is performed using the suggested method rather than the joystick.

A total of fourteen users (4 female, 10 male) attended the pilot study. Before walking through the virtual world, the participants were asked to estimate a distance of 6 meters in the real world by vision only, in order to assess the individual's ability to estimate distances. As is typical for such tasks, the persons underestimated the length: the median estimate was 5.19 meters with a standard deviation of 0.7 m. The mean absolute percentage error was 15.6 %. Participants were not told the true distance to avoid influencing the estimation in the virtual world.

The task in the DAVE was to walk through one of the archways, turn around at the column and walk back through the other archway. Seven persons used the Kinect navigation method and the other seven a pointing device. So half of the participants had to move their legs to move forward and the other half just pressed a button. Afterwards the participants were asked two questions:

  • Did you have difficulties navigating through the archway?

  • How long is the distance from the archway to the column?

None of the participants had problems navigating through the archway. The estimates of the distance to the column are shown in Table 1.

Table 1. Estimated distance in meters in the virtual world.

Person   Navigation   Actual distance (m)   Estimated distance (m)   Difference (m)   Real-world estimate of 6 m (m)
1        Kinect       15                    12.0                      3.0             4.40
2        Kinect       15                    12.0                      3.0             4.72
3        Kinect       10                    10.0                      0.0             4.50
4        Kinect       15                    10.0                      5.0             5.15
5        Kinect       15                     7.0                      8.0             5.60
6        Kinect       15                    20.0                      5.0             4.65
7        Kinect       15                    17.5                     -2.5             5.15
8        Joystick     10                     7.0                      3.0             5.32
9        Joystick     15                    13.0                      2.0             4.80
10       Joystick     15                    13.0                      2.0             6.25
11       Joystick     15                    12.0                      3.0             5.10
12       Joystick     10                     8.0                      2.0             4.40
13       Joystick     10                    13.0                     -3.0             6.48
14       Joystick     15                    12.0                      3.0             6.18

Standard deviation of the difference: total 2.86 m, Kinect 3.47 m, Joystick 2.14 m.

The average error in estimating the distance is 23.1 %. For the joystick navigation only it is 20.9 % and for the Kinect navigation only it is 25.2 %. The Kinect navigation thus had no relevant effect on the ability to estimate distances. Potentially the test scenario was not elaborate enough for the walking movement to impact distance cognition. Most of the participants estimated the distance by looking at the geometry and not by the walking time or physical stress. A test with a longer travel distance should be used in further studies.

The third test was performed as a sequel to the first test. The main hypothesis was that the perception of distances, timing and directions in the DAVE does not differ significantly from perception in the real world. The test was executed in a classical parallel experiment setting with a subset of 21 persons performing the tasks in the DAVE and the remaining 14 persons in the corresponding real environment (four subjects either did not show up for the second part of the test or did not complete the test due to cyber sickness). The main finding in this respect was that statistical tests did not find any significant differences in perception. Details are documented in [BSS13].

6.  Conclusion

In this paper an approach for hands-free navigation in an immersive environment has been described.

Using the Microsoft Kinect in the four-sided DAVE, a method for navigation and movement controls using the user's gestures and postures has been designed and implemented. Compared to other solutions, the installation of the Kinect sensor is inexpensive and can be realized in limited space. As the proposed technique is completely vision-based, the user does not have to learn how to use a new device. Only a short learning phase of body movements is needed. This is a more intuitive interaction and closer to real walking than commonly used approaches based on special input devices (3D joystick, cyber gloves, etc.), which compromise the user's VR presence.

The approach is already implemented and tested with respect to the perceived realism. However, the sample sizes and the number of scenarios tested are rather limited. Therefore additional experiments are needed. The perception of distances and walking time will be the focus of further investigations in this respect.

The new navigation method forms the cornerstone of a test lab for evaluating the effectiveness of smartphone-based indoor navigation systems. The measurement setting as well as the integration of indoor navigation systems have been described above. The DAVE is therefore fully equipped to be used as a lab for testing the usability of indoor navigation systems.

7.  Acknowledgments

This research is partly funded via the project IMITATE, financed by the BMVIT under the research initiative IV2SPlus, which is gratefully acknowledged. This research was partially done for Fraunhofer IDM@NTU, which is funded by the National Research Foundation (NRF) and managed through the multi-agency Interactive & Digital Media Programme Office (IDMPO) hosted by the Media Development Authority of Singapore (MDA). The authors would also like to thank Torsten Ullrich, Jasmin Schneckenburger and Christian Kogler for their support.

Bibliography

[AVJ07] Nicoletta Adamo-Villani, David Jones: Travel in Smile: a study of two immersive motion control techniques. Proceedings of the 2007 International Association for Development of the Information Society (IADIS) International Conference on Computer Graphics and Visualization, 2007, pp. 43-49. ISBN 978-972-8924-39-3.

[BBH05] Steffi Beckhaus, Kristopher J. Blom, Matthias Haringer: Intuitive, Hands-free Travel Interfaces for Virtual Environments. Proceedings of the 2005 IEEE Conference on Virtual Reality, Workshop on New Directions in 3D User Interfaces, 2005, pp. 57-60. ISBN 3-8322-3830-1.

[BDA99] Patrick Bourdot, Martin Dromigny, Laurent Arnal: Virtual navigation fully controlled by head tracking. Proceedings of the 7th International Scientific Workshop on Virtual Reality and Prototyping, 1999, pp. 1-9. www-sop.inria.fr/epidaure/Collaborations/GT-RV/JT-GT-RV7/.

[BKH97] Doug A. Bowman, David Koller, Larry F. Hodges: Travel in Immersive Virtual Environments: An Evaluation of Viewpoint Motion Control Techniques. Proceedings of the 1997 Virtual Reality Annual International Symposium, 1997, pp. 45-52. DOI 10.1109/VRAIS.1997.583043. ISBN 0-8186-7843-7.

[BKLP05] Doug A. Bowman, Ernst Kruijff, Joseph J. LaViola, Ivan Poupyrev: 3D User Interfaces: Theory and Practice. Addison-Wesley Professional, Boston, 2005. ISBN 978-0-201-75867-2.

[BSS13] Dietmar Bauer, Jasmin Schneckenburger, Volker Settgast, Alex Millonig, Georg Gartner: Hands free steering in a virtual world for the evaluation of guidance systems in pedestrian infrastructures: design and validation. Proceedings of the Annual Meeting of the TRB, Washington, 2013, 13-1484.

[Con11] ConnectSolutions: Xuggler. www.xuggle.com/xuggler, 2011, last visited April 1st, 2014.

[FHH03] Dieter W. Fellner, Sven Havemann, Armin Hopp: DAVE - Eine neue Technologie zur preiswerten und hochqualitativen immersiven 3D-Darstellung (DAVE - A new technology for inexpensive and high-quality immersive 3D visualization). 8. Workshop Sichtsysteme - Visualisierung in der Simulationstechnik, Reinhard Möller (Ed.), 2003, pp. 77-87, Shaker Verlag, Aachen. ISBN 3-8322-2151-4.

[FRE03] Kiran J. Fernandes, Vinesh Raja, Julian Eyre: Cybersphere: the fully immersive spherical projection system. Communications of the ACM, 46 (2003), no. 9, 141-146. DOI 10.1145/903893.903929. ISSN 0001-0782.

[IYFN05] Hiroo Iwata, Hiroaki Yano, Hiroyuki Fukushima, Haruo Noma: CirculaFloor. IEEE Computer Graphics and Applications, 25 (2005), no. 1, 64-67. DOI 10.1109/MCG.2005.5. ISSN 0272-1716.

[LFKZ01] Joseph J. LaViola Jr., Daniel Acevedo Feliz, Daniel F. Keefe, Robert C. Zeleznik: Hands-free multi-scale navigation in virtual environments. I3D '01: Proceedings of the 2001 Symposium on Interactive 3D Graphics, 2001, pp. 9-15. DOI 10.1145/364338.364339. ISBN 1-58113-292-1.

[JKS11] Thomas Jung, Stephan Krohn, Peter Schmidt: Ein Natural User Interface zur Interaktion in einem CAVE Automatic Virtual Environment basierend auf optischem Tracking (A natural user interface for interaction in a CAVE based on optical tracking). 3D-NordOst 2011: Tagungsband / 14. Anwendungsbezogener Workshop zur Erfassung, Modellierung, Verarbeitung und Auswertung von 3D-Daten im Rahmen der GFaI-Workshop-Familie NordOst, 2011, pp. 93-102. ISBN 978-3-942709-03-3.

[KHR11] Tobias Kretz, Stefan Hengst, Vidal Roca, Antonia Pérez Arias, Simon Friedberger, Uwe D. Hanebeck: Calibrating Dynamic Pedestrian Route Choice with an Extended Range Telepresence System. 2011 IEEE International Conference on Computer Vision Workshops, 2011, pp. 166-172. DOI 10.1109/ICCVW.2011.6130239. ISBN 978-1-4673-0061-2.

[LSF08] Marcel Lancelle, Volker Settgast, Dieter W. Fellner: Definitely Affordable Virtual Environment. IEEE Virtual Reality Conference 2008, 2008. ISBN 978-1-4244-1972-2.

[LSFP07] Robert Leeb, Volker Settgast, Dieter Fellner, Gert Pfurtscheller: Self-paced exploration of the Austrian National Library through thought. International Journal of Bioelectromagnetism, 9 (2007), no. 4, 237-244. ISSN 1456-7865.

[Min95] Mark R. Mine: Virtual Environment Interaction Techniques. University of North Carolina at Chapel Hill, 1995, Technical Report TR95-018.

[MRSF12] Alexandra Millonig, Markus Ray, Helmut Schrom-Feiertag: Monitoring Pedestrian Spatio-Temporal Behaviour using Semi-Automated Shadowing. Online Research Methods in Urban and Planning Studies: Design and Outcomes, C. Nunes Silva (Ed.), IGI Global, 2012, pp. 312-333. DOI 10.4018/978-1-4666-0074-4.ch019. ISBN 978-1-466-60074-4.

[STU07] Martin Schwaiger, Thomas Thümmel, Heinz Ulbrich: Cyberwalk: Implementation of a Ball Bearing Platform for Humans. Human-Computer Interaction. Interaction Platforms and Techniques (Julie A. Jacko, Ed.), Lecture Notes in Computer Science, Vol. 4551, 2007, pp. 926-935. DOI 10.1007/978-3-540-73107-8_102. ISBN 978-3-540-73106-1.

[SUS95] Mel Slater, Martin Usoh, Anthony Steed: Taking steps: the influence of a walking technique on presence in virtual reality. ACM Transactions on Computer-Human Interaction, 2 (1995), no. 3, 201-219. DOI 10.1145/210079.210084. ISSN 1073-0516.

[Vol14] Joseph Volpe: Google Maps 6.0 hits Android, adds indoor navigation for retail and transit. Engadget, http://www.engadget.com/2011/11/29/google-maps-6-0-hits-android-adds-indoor-navigation-for-retail/, last visited April 1st, 2014.

[WO90] Colin Ware, Steven Osborne: Exploration and virtual camera control in virtual three dimensional environments. I3D '90: Proceedings of the 1990 Symposium on Interactive 3D Graphics, 1990, pp. 175-183. DOI 10.1145/91385.91442. ISBN 0-89791-351-5.



[1] The basics of the approach have also been included in [BSS13].


License

Any party may pass on this Work by electronic means and make it available for download under the terms and conditions of the Digital Peer Publishing License. The text of the license may be accessed and retrieved at http://www.dipp.nrw.de/lizenzen/dppl/dppl/DPPL_v2_en_06-2004.html.