Precise Near-to-Head Acoustics with Binaural Synthesis

Citation and metadata

Recommended citation

Tobias Lentz, Ingo Assenmacher, Michael Vorländer, and Torsten Kuhlen, Precise Near-to-Head Acoustics with Binaural Synthesis. JVRB - Journal of Virtual Reality and Broadcasting, 3(2006), no. 2. (urn:nbn:de:0009-6-5890)

Download Citation

EndNote

%0 Journal Article
%T Precise Near-to-Head Acoustics with Binaural Synthesis
%A Lentz, Tobias
%A Assenmacher, Ingo
%A Vorländer, Michael
%A Kuhlen, Torsten
%J JVRB - Journal of Virtual Reality and Broadcasting
%D 2006
%V 3(2006)
%N 2
%@ 1860-2037
%F lentz2006
%X For enhanced immersion into a virtual scene more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation appears to have much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This provides the possibility of simulating spatially distributed sources and, in addition to that, near-to-head sources for a freely moving listener in room-mounted virtual environments without using any headphones. A special focus will be put on near-to-head acoustics, and requirements in respect of the head-related transfer function databases are discussed.
%L 004
%K Crosstalk cancellation
%K Spatial Acoustics
%K binaural synthesis
%K interactive Virtual Reality
%K multi-modality
%R 10.20385/1860-2037/3.2006.2
%U http://nbn-resolving.de/urn:nbn:de:0009-6-5890
%U http://dx.doi.org/10.20385/1860-2037/3.2006.2


BibTeX

@Article{lentz2006,
  author = 	"Lentz, Tobias
		and Assenmacher, Ingo
		and Vorl{\"a}nder, Michael
		and Kuhlen, Torsten",
  title = 	"Precise Near-to-Head Acoustics with Binaural Synthesis",
  journal = 	"JVRB - Journal of Virtual Reality and Broadcasting",
  year = 	"2006",
  volume = 	"3(2006)",
  number = 	"2",
  keywords = 	"Crosstalk cancellation; Spatial Acoustics; binaural synthesis; interactive Virtual Reality; multi-modality",
  abstract = 	"For enhanced immersion into a virtual scene more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation appears to have much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This provides the possibility of simulating spatially distributed sources and, in addition to that, near-to-head sources for a freely moving listener in room-mounted virtual environments without using any headphones. A special focus will be put on near-to-head acoustics, and requirements in respect of the head-related transfer function databases are discussed.",
  issn = 	"1860-2037",
  doi = 	"10.20385/1860-2037/3.2006.2",
  url = 	"http://nbn-resolving.de/urn:nbn:de:0009-6-5890"
}


RIS

TY  - JOUR
AU  - Lentz, Tobias
AU  - Assenmacher, Ingo
AU  - Vorländer, Michael
AU  - Kuhlen, Torsten
PY  - 2006
DA  - 2006//
TI  - Precise Near-to-Head Acoustics with Binaural Synthesis
JO  - JVRB - Journal of Virtual Reality and Broadcasting
VL  - 3(2006)
IS  - 2
KW  - Crosstalk cancellation
KW  - Spatial Acoustics
KW  - binaural synthesis
KW  - interactive Virtual Reality
KW  - multi-modality
AB  - For enhanced immersion into a virtual scene more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation appears to have much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This provides the possibility of simulating spatially distributed sources and, in addition to that, near-to-head sources for a freely moving listener in room-mounted virtual environments without using any headphones. A special focus will be put on near-to-head acoustics, and requirements in respect of the head-related transfer function databases are discussed.
SN  - 1860-2037
UR  - http://nbn-resolving.de/urn:nbn:de:0009-6-5890
DO  - 10.20385/1860-2037/3.2006.2
ID  - lentz2006
ER  - 

Wordbib

<?xml version="1.0" encoding="UTF-8"?>
<b:Sources SelectedStyle="" xmlns:b="http://schemas.openxmlformats.org/officeDocument/2006/bibliography"  xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" >
<b:Source>
<b:Tag>lentz2006</b:Tag>
<b:SourceType>ArticleInAPeriodical</b:SourceType>
<b:Year>2006</b:Year>
<b:PeriodicalTitle>JVRB - Journal of Virtual Reality and Broadcasting</b:PeriodicalTitle>
<b:Volume>3(2006)</b:Volume>
<b:Issue>2</b:Issue>
<b:Url>http://nbn-resolving.de/urn:nbn:de:0009-6-5890</b:Url>
<b:Url>http://dx.doi.org/10.20385/1860-2037/3.2006.2</b:Url>
<b:Author>
<b:Author><b:NameList>
<b:Person><b:Last>Lentz</b:Last><b:First>Tobias</b:First></b:Person>
<b:Person><b:Last>Assenmacher</b:Last><b:First>Ingo</b:First></b:Person>
<b:Person><b:Last>Vorländer</b:Last><b:First>Michael</b:First></b:Person>
<b:Person><b:Last>Kuhlen</b:Last><b:First>Torsten</b:First></b:Person>
</b:NameList></b:Author>
</b:Author>
<b:Title>Precise Near-to-Head Acoustics with Binaural Synthesis</b:Title>
<b:Comments>For enhanced immersion into a virtual scene more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation appears to have much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This provides the possibility of simulating spatially distributed sources and, in addition to that, near-to-head sources for a freely moving listener in room-mounted virtual environments without using any headphones. A special focus will be put on near-to-head acoustics, and requirements in respect of the head-related transfer function databases are discussed.</b:Comments>
</b:Source>
</b:Sources>

ISI

PT Journal
AU Lentz, T
   Assenmacher, I
   Vorländer, M
   Kuhlen, T
TI Precise Near-to-Head Acoustics with Binaural Synthesis
SO JVRB - Journal of Virtual Reality and Broadcasting
PY 2006
VL 3(2006)
IS 2
DI 10.20385/1860-2037/3.2006.2
DE Crosstalk cancellation; Spatial Acoustics; binaural synthesis; interactive Virtual Reality; multi-modality
AB For enhanced immersion into a virtual scene more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation appears to have much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This provides the possibility of simulating spatially distributed sources and, in addition to that, near-to-head sources for a freely moving listener in room-mounted virtual environments without using any headphones. A special focus will be put on near-to-head acoustics, and requirements in respect of the head-related transfer function databases are discussed.
ER


MODS

<mods>
  <titleInfo>
    <title>Precise Near-to-Head Acoustics with Binaural Synthesis</title>
  </titleInfo>
  <name type="personal">
    <namePart type="family">Lentz</namePart>
    <namePart type="given">Tobias</namePart>
  </name>
  <name type="personal">
    <namePart type="family">Assenmacher</namePart>
    <namePart type="given">Ingo</namePart>
  </name>
  <name type="personal">
    <namePart type="family">Vorländer</namePart>
    <namePart type="given">Michael</namePart>
  </name>
  <name type="personal">
    <namePart type="family">Kuhlen</namePart>
    <namePart type="given">Torsten</namePart>
  </name>
  <abstract>For enhanced immersion into a virtual scene more than just the visual sense should be addressed by a Virtual Reality system. Additional auditory stimulation appears to have much potential, as it realizes a multisensory system. This is especially useful when the user does not have to wear any additional hardware, e.g., headphones. Creating a virtual sound scene with spatially distributed sources requires a technique for adding spatial cues to audio signals and an appropriate reproduction. In this paper we present a real-time audio rendering system that combines dynamic crosstalk cancellation and multi-track binaural synthesis for virtual acoustical imaging. This provides the possibility of simulating spatially distributed sources and, in addition to that, near-to-head sources for a freely moving listener in room-mounted virtual environments without using any headphones. A special focus will be put on near-to-head acoustics, and requirements in respect of the head-related transfer function databases are discussed.</abstract>
  <subject>
    <topic>Crosstalk cancellation</topic>
    <topic>Spatial Acoustics</topic>
    <topic>binaural synthesis</topic>
    <topic>interactive Virtual Reality</topic>
    <topic>multi-modality</topic>
  </subject>
  <classification authority="ddc">004</classification>
  <relatedItem type="host">
    <genre authority="marcgt">periodical</genre>
    <genre>academic journal</genre>
    <titleInfo>
      <title>JVRB - Journal of Virtual Reality and Broadcasting</title>
    </titleInfo>
    <part>
      <detail type="volume">
        <number>3(2006)</number>
      </detail>
      <detail type="issue">
        <number>2</number>
      </detail>
      <date>2006</date>
    </part>
  </relatedItem>
  <identifier type="issn">1860-2037</identifier>
  <identifier type="urn">urn:nbn:de:0009-6-5890</identifier>
  <identifier type="doi">10.20385/1860-2037/3.2006.2</identifier>
  <identifier type="uri">http://nbn-resolving.de/urn:nbn:de:0009-6-5890</identifier>
  <identifier type="citekey">lentz2006</identifier>
</mods>
