
14.2017

  1. 2019-02-04

    The Virtual Pole: Exploring Human Responses to Fear of Heights in Immersive Virtual Environments

    Measuring how effectively immersive virtual environments (IVEs) reproduce the sensations evoked by comparable real-world situations is an important task for many application fields. In this paper, we present an experimental setup, which we call the virtual pole, in which we evaluated human responses to fear of heights. We conducted a set of experiments in which we analyzed correlations between subjective and physiological anxiety measures as well as the participants' view direction. Our results show that the view direction plays an important role in subjective and physiological anxiety in an IVE due to the limited field of view (FOV), and that the subjective and physiological anxiety measures increase monotonically with height. In addition, we found that participants recollected the virtual content they saw at the top height more accurately than the content they saw at the medium height. We discuss the results and provide guidelines for simulations aimed at evoking fear-of-heights responses in IVEs.

    JVRB, 14(2017), no. 6.

  2. 2018-05-25

    Audiovisual perception of real and virtual rooms

    Virtual environments utilized in experimental perception research are normally required to provide rich physical cues if they are to yield externally valid perceptual results. We investigated the perceptual difference between a real environment and a virtual environment under optical, acoustic, and optoacoustic conditions by conducting a 2 × 3 mixed-design experiment, with environment as a between-subjects factor and domain as a within-subjects factor. The dependent variables comprised auditory, visual, and audiovisual features, including geometric estimates, aesthetic judgments, and sense of spatial presence. The real environment consisted of four visible loudspeakers in a small concert hall, playing back an anechoic multichannel recording of a string quartet. In the virtual environment, deemed the Virtual Concert Hall, the scene was reproduced three-dimensionally by applying dynamic binaural synthesis and stereoscopic projection on a 160° cylindrical screen. Most unimodal features were rated almost equally across the environments under both the optical/acoustic and the optoacoustic conditions. Estimates of geometric dimensions were lower (though not necessarily less accurate) in the virtual than in the real environment. Aesthetic features were rated almost equally across the environments under the acoustic condition but not under the optical condition, and similarly under the optoacoustic condition. Further results indicate that unimodal features of room perception might be subject to cognitive reconstruction due to both information acquired from another stimulus domain and abstract experiential knowledge of rooms. In conclusion, the validity of the Virtual Concert Hall for certain experimental applications is discussed.

    JVRB, 14(2017), no. 5.

VRIC 2015
  1. 2018-06-06

    A Classification of Human-to-Human Communication during the Use of Immersive Teleoperation Interfaces

    We propose a classification of human-to-human communication during the use of immersive teleoperation interfaces based on real-life examples. While a large body of research is concerned with communication in collaborative virtual environments (CVEs), less research focuses on cases where only one of two communicating users is immersed in a virtual or remote environment. Furthermore, we identify the unmediated communication between co-located users of an immersive teleoperation interface as another conceptually important — but usually neglected — case. To cover these scenarios, one of the dimensions of the proposed classification is the level of copresence of the communicating users. Further dimensions are the virtuality of the immersive environment, the virtual transport of the immersed user(s), the point of view of the user(s), the asynchronicity of the users’ communication, the communication channel, and the mediation of the communication. We find that an extension of the proposed classification to real environments can offer useful reference cases. Using this extended classification not only allows us to discuss and understand differences and similarities of various forms of communication in a more systematic way, but it also provides guidelines and reference cases for the design of immersive teleoperation interfaces to better support human-to-human communication.

    JVRB, 14(2017), no. 1.

EuroVR 2016
  1. 2019-02-04

    3D reconstruction with a markerless tracking method of flexible and modular molecular physical models: towards tangible interfaces

    Physical models have always been used in the field of molecular science as an understandable representation of complex molecules, particularly in chemistry. Even though physical models have recently been complemented by numerical in silico molecular visualizations, which offer a wide range of molecular representations and rendering features, they are still involved in research work and teaching, because they are more suitable than virtual objects for manipulating and building molecular structures. In this paper, we present a markerless tracking method to construct a molecular virtual representation from a flexible and modular physical model. Our approach is based on a single RGB camera to reconstruct the physical model in interactive time in order to use it as a tangible interface, and thus benefits from both physical and virtual representations. This method was designed to require only a light virtual and augmented reality hardware setup, such as a smartphone or an HMD with a mounted camera, providing a markerless molecular tangible interface suitable for a classroom context or a biochemistry researcher's desktop. The approach uses a fast image-processing algorithm based on color blob detection to extract 2D atom positions of a user-defined conformation in each frame of a video. A tracking algorithm recovers a set of 2D projected atom positions as input to the 3D reconstruction stage, which is based on a Structure From Motion method. We tuned this method to robustly process a few key feature points and combine them within a global point cloud. Biological knowledge drives the final reconstruction, filling in missing atoms to obtain the desired molecular conformation.
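    The first stage of the pipeline described above, extracting 2D atom positions via color blob detection, can be illustrated with a minimal pure-NumPy sketch. The function `detect_blobs`, the color thresholds, and the synthetic test image below are illustrative assumptions, not the authors' implementation (which operates on live camera frames):

    ```python
    import numpy as np
    from collections import deque

    def detect_blobs(img, lo, hi):
        """Return (row, col) centroids of connected pixel blobs whose RGB
        values lie inside the inclusive [lo, hi] range -- a stand-in for
        color-blob detection of atom positions in a video frame."""
        mask = np.all((img >= lo) & (img <= hi), axis=-1)
        seen = np.zeros_like(mask, dtype=bool)
        centroids = []
        h, w = mask.shape
        for r in range(h):
            for c in range(w):
                if mask[r, c] and not seen[r, c]:
                    # BFS flood fill over 4-connected neighbours
                    queue, pixels = deque([(r, c)]), []
                    seen[r, c] = True
                    while queue:
                        y, x = queue.popleft()
                        pixels.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                queue.append((ny, nx))
                    ys, xs = zip(*pixels)
                    centroids.append((sum(ys) / len(pixels), sum(xs) / len(pixels)))
        return centroids

    # Synthetic 10x10 frame: two "red atom" blobs on a black background.
    frame = np.zeros((10, 10, 3), dtype=np.uint8)
    frame[2:4, 2:4] = (255, 0, 0)   # 2x2 blob, centroid (2.5, 2.5)
    frame[7, 7] = (255, 0, 0)       # single pixel, centroid (7.0, 7.0)
    atoms_2d = detect_blobs(frame, (200, 0, 0), (255, 60, 60))
    ```

    In a full implementation the per-frame centroids would then feed the tracking stage, which associates blobs across frames before Structure From Motion recovers 3D positions.
    
    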

    JVRB, 14(2017), no. 2.

  2. 2018-06-26

    HOM3R: A 3D Viewer for Complex Hierarchical Product Models

    We present HOM3R, a novel 3D viewer designed to manage complex industrial product models. The viewer includes a JavaScript API to interface with existing or new browser-based applications. We extend state-of-the-art interaction techniques and introduce a novel navigation metaphor for moving around complex products using constrained trajectories. To address the challenge of discovering and accessing parts of a complex object that are not visible, we implemented a set of occlusion management techniques. The viewer offers other useful features such as hierarchical part selection and linking of information to 3D geometry. A user-centred evaluation of the tool has been carried out and is described in the paper.

    JVRB, 14(2017), no. 3.

ACE 2016
  1. 2019-01-17

    Games as Blends: Understanding Hybrid Games

    The meaning of what hybrid games are is often fixed to the context in which the term is used. For example, hybrid games have often been defined in relation to recent developments in technology. This creates issues in the term's usage and limitations in thinking. This paper argues that hybrid games should be understood through conceptual metaphors. Hybridity is the blending of different cognitive domains that are not usually associated with each other. Hybrid games usually blend domains related to games, for example digital and board games, but can also blend other domains. By viewing game experiences as blends of different domains, designers can understand the inherent hybridity in various types of games and use that understanding when building new designs.

    JVRB, 14(2017), no. 4.