Citation and metadata
Recommended citation
Ingo Schiller, Bogumil Bartczak, Falko Kellner, and Reinhard Koch, Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera. JVRB - Journal of Virtual Reality and Broadcasting, 7(2010), no. 4. (urn:nbn:de:0009-6-25786)
Download Citation
Endnote
%0 Journal Article %T Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera %A Schiller, Ingo %A Bartczak, Bogumil %A Kellner, Falko %A Koch, Reinhard %J JVRB - Journal of Virtual Reality and Broadcasting %D 2010 %V 7(2010) %N 4 %@ 1860-2037 %F schiller2010 %X For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning. %L 004 %K 3D-Modeling %K 3DTV %K Depth Keying %K GPU %K Mixed Reality %K PMD %K Shadow Mapping %K TOF %K Time-of-Flight %R 10.20385/1860-2037/7.2010.4 %U http://nbn-resolving.de/urn:nbn:de:0009-6-25786 %U http://dx.doi.org/10.20385/1860-2037/7.2010.4
Bibtex
@Article{schiller2010, author = "Schiller, Ingo and Bartczak, Bogumil and Kellner, Falko and Koch, Reinhard", title = "Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera", journal = "JVRB - Journal of Virtual Reality and Broadcasting", year = "2010", volume = "7(2010)", number = "4", keywords = "3D-Modeling; 3DTV; Depth Keying; GPU; Mixed Reality; PMD; Shadow Mapping; TOF; Time-of-Flight", abstract = "For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.", issn = "1860-2037", doi = "10.20385/1860-2037/7.2010.4", url = "http://nbn-resolving.de/urn:nbn:de:0009-6-25786" }
RIS
TY - JOUR AU - Schiller, Ingo AU - Bartczak, Bogumil AU - Kellner, Falko AU - Koch, Reinhard PY - 2010 DA - 2010// TI - Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera JO - JVRB - Journal of Virtual Reality and Broadcasting VL - 7(2010) IS - 4 KW - 3D-Modeling KW - 3DTV KW - Depth Keying KW - GPU KW - Mixed Reality KW - PMD KW - Shadow Mapping KW - TOF KW - Time-of-Flight AB - For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning. SN - 1860-2037 UR - http://nbn-resolving.de/urn:nbn:de:0009-6-25786 DO - 10.20385/1860-2037/7.2010.4 ID - schiller2010 ER -
Wordbib
<?xml version="1.0" encoding="UTF-8"?> <b:Sources SelectedStyle="" xmlns:b="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" > <b:Source> <b:Tag>schiller2010</b:Tag> <b:SourceType>ArticleInAPeriodical</b:SourceType> <b:Year>2010</b:Year> <b:PeriodicalTitle>JVRB - Journal of Virtual Reality and Broadcasting</b:PeriodicalTitle> <b:Volume>7(2010)</b:Volume> <b:Issue>4</b:Issue> <b:Url>http://nbn-resolving.de/urn:nbn:de:0009-6-25786</b:Url> <b:Url>http://dx.doi.org/10.20385/1860-2037/7.2010.4</b:Url> <b:Author> <b:Author><b:NameList> <b:Person><b:Last>Schiller</b:Last><b:First>Ingo</b:First></b:Person> <b:Person><b:Last>Bartczak</b:Last><b:First>Bogumil</b:First></b:Person> <b:Person><b:Last>Kellner</b:Last><b:First>Falko</b:First></b:Person> <b:Person><b:Last>Koch</b:Last><b:First>Reinhard</b:First></b:Person> </b:NameList></b:Author> </b:Author> <b:Title>Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera</b:Title> <b:Comments>For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.</b:Comments> </b:Source> </b:Sources>
ISI
PT Journal AU Schiller, I Bartczak, B Kellner, F Koch, R TI Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera SO JVRB - Journal of Virtual Reality and Broadcasting PY 2010 VL 7(2010) IS 4 DI 10.20385/1860-2037/7.2010.4 DE 3D-Modeling; 3DTV; Depth Keying; GPU; Mixed Reality; PMD; Shadow Mapping; TOF; Time-of-Flight AB For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning. ER
Mods
<mods> <titleInfo> <title>Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera</title> </titleInfo> <name type="personal"> <namePart type="family">Schiller</namePart> <namePart type="given">Ingo</namePart> </name> <name type="personal"> <namePart type="family">Bartczak</namePart> <namePart type="given">Bogumil</namePart> </name> <name type="personal"> <namePart type="family">Kellner</namePart> <namePart type="given">Falko</namePart> </name> <name type="personal"> <namePart type="family">Koch</namePart> <namePart type="given">Reinhard</namePart> </name> <abstract>For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. 
The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.</abstract> <subject> <topic>3D-Modeling</topic> <topic>3DTV</topic> <topic>Depth Keying</topic> <topic>GPU</topic> <topic>Mixed Reality</topic> <topic>PMD</topic> <topic>Shadow Mapping</topic> <topic>TOF</topic> <topic>Time-of-Flight</topic> </subject> <classification authority="ddc">004</classification> <relatedItem type="host"> <genre authority="marcgt">periodical</genre> <genre>academic journal</genre> <titleInfo> <title>JVRB - Journal of Virtual Reality and Broadcasting</title> </titleInfo> <part> <detail type="volume"> <number>7(2010)</number> </detail> <detail type="issue"> <number>4</number> </detail> <date>2010</date> </part> </relatedItem> <identifier type="issn">1860-2037</identifier> <identifier type="urn">urn:nbn:de:0009-6-25786</identifier> <identifier type="doi">10.20385/1860-2037/7.2010.4</identifier> <identifier type="uri">http://nbn-resolving.de/urn:nbn:de:0009-6-25786</identifier> <identifier type="citekey">schiller2010</identifier> </mods>
Full Metadata
| Bibliographic Citation | JVRB, 7(2010), no. 4. |
|---|---|
| Title | Increasing Realism and Supporting Content Planning for Dynamic Scenes in a Mixed Reality System incorporating a Time-of-Flight Camera (eng) |
| Author | Ingo Schiller, Bogumil Bartczak, Falko Kellner, Reinhard Koch |
| Language | eng |
| Abstract | For broadcasting purposes MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous nowadays. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications which uses depth keying and provides threedimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation which we consider a core issue to obtain realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore we present a possibility to support placement of virtual content in the scene. Core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma-keying is replaced by depth-keying which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) by the usage of an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is herewith performed and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real-time, which makes the approach suitable for future 3DTV productions. The presented paper gives an overview of the whole system approach including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning. |
| Subject | 3D-Modeling, 3DTV, Depth Keying, GPU, Mixed Reality, PMD, Shadow Mapping, TOF, Time-of-Flight |
| Classified Subjects | DDC: 004 |
| Rights | DPPL |
| URN | urn:nbn:de:0009-6-25786 |
| DOI | https://doi.org/10.20385/1860-2037/7.2010.4 |