Real-time depth camera tracking with CAD models and ICP

Citation and metadata

Recommended citation

Otto Korkalo and Svenja Kahn: Real-time depth camera tracking with CAD models and ICP. Journal of Virtual Reality and Broadcasting, 13(2016), no. 1. (urn:nbn:de:0009-6-44132)
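For context, the tracking loop the paper describes (render the CAD model at the latest pose estimate, back-project the rendered and the captured depth maps to point clouds, align them with ICP to get the incremental pose change) can be sketched with a minimal point-to-point ICP. This is an illustrative reconstruction only, not the authors' GPGPU implementation: the paper uses all depth pixels with an efficient GPU pipeline, whereas the sketch below uses brute-force nearest-neighbour matching and an SVD-based (Kabsch) rigid fit.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=30):
    """Align point cloud src to dst; returns the accumulated (R, t)."""
    R_acc, t_acc = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (O(N*M), demo only)
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matches = dst[d2.argmin(axis=1)]
        R, t = best_fit_transform(cur, matches)
        cur = cur @ R.T + t
        # compose with the transform accumulated so far
        R_acc, t_acc = R @ R_acc, R @ t_acc + t
    return R_acc, t_acc
```

In the paper's setting, `dst` would come from the captured depth frame and `src` from the depth map rendered with the CAD model at the previous pose; the recovered (R, t) is the incremental camera motion.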

Download Citation

Endnote

%0 Journal Article
%T Real-time depth camera tracking with CAD models and ICP
%A Korkalo, Otto
%A Kahn, Svenja
%J Journal of Virtual Reality and Broadcasting
%D 2016
%V 13(2016)
%N 1
%@ 1860-2037
%F korkalo2016
%X In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
%L 004
%K CAD model
%K Depth Camera
%K ICP
%K KINECT
%K Mixed Reality
%K Pose Estimation
%K Tracking
%R 10.20385/1860-2037/13.2016.1
%U http://nbn-resolving.de/urn:nbn:de:0009-6-44132
%U http://dx.doi.org/10.20385/1860-2037/13.2016.1


Bibtex

@Article{korkalo2016,
  author = 	"Korkalo, Otto
		and Kahn, Svenja",
  title = 	"Real-time depth camera tracking with CAD models and ICP",
  journal = 	"Journal of Virtual Reality and Broadcasting",
  year = 	"2016",
  volume = 	"13(2016)",
  number = 	"1",
  keywords = 	"CAD model; Depth Camera; ICP; KINECT; Mixed Reality; Pose Estimation; Tracking",
  abstract = 	"In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.",
  issn = 	"1860-2037",
  doi = 	"10.20385/1860-2037/13.2016.1",
  url = 	"http://nbn-resolving.de/urn:nbn:de:0009-6-44132"
}


RIS

TY  - JOUR
AU  - Korkalo, Otto
AU  - Kahn, Svenja
PY  - 2016
DA  - 2016//
TI  - Real-time depth camera tracking with CAD models and ICP
JO  - Journal of Virtual Reality and Broadcasting
VL  - 13(2016)
IS  - 1
KW  - CAD model
KW  - Depth Camera
KW  - ICP
KW  - KINECT
KW  - Mixed Reality
KW  - Pose Estimation
KW  - Tracking
AB  - In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
SN  - 1860-2037
UR  - http://nbn-resolving.de/urn:nbn:de:0009-6-44132
DO  - 10.20385/1860-2037/13.2016.1
ID  - korkalo2016
ER  - 

Wordbib

<?xml version="1.0" encoding="UTF-8"?>
<b:Sources SelectedStyle="" xmlns:b="http://schemas.openxmlformats.org/officeDocument/2006/bibliography" xmlns="http://schemas.openxmlformats.org/officeDocument/2006/bibliography">
<b:Source>
<b:Tag>korkalo2016</b:Tag>
<b:SourceType>ArticleInAPeriodical</b:SourceType>
<b:Year>2016</b:Year>
<b:PeriodicalTitle>Journal of Virtual Reality and Broadcasting</b:PeriodicalTitle>
<b:Volume>13(2016)</b:Volume>
<b:Issue>1</b:Issue>
<b:Url>http://nbn-resolving.de/urn:nbn:de:0009-6-44132</b:Url>
<b:Url>http://dx.doi.org/10.20385/1860-2037/13.2016.1</b:Url>
<b:Author>
<b:Author><b:NameList>
<b:Person><b:Last>Korkalo</b:Last><b:First>Otto</b:First></b:Person>
<b:Person><b:Last>Kahn</b:Last><b:First>Svenja</b:First></b:Person>
</b:NameList></b:Author>
</b:Author>
<b:Title>Real-time depth camera tracking with CAD models and ICP</b:Title>
<b:Comments>In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model&apos;s coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.</b:Comments>
</b:Source>
</b:Sources>

ISI

PT Journal
AU Korkalo, O
   Kahn, S
TI Real-time depth camera tracking with CAD models and ICP
SO Journal of Virtual Reality and Broadcasting
PY 2016
VL 13(2016)
IS 1
DI 10.20385/1860-2037/13.2016.1
DE CAD model; Depth Camera; ICP; KINECT; Mixed Reality; Pose Estimation; Tracking
AB In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
ER


Mods

<mods>
  <titleInfo>
    <title>Real-time depth camera tracking with CAD models and ICP</title>
  </titleInfo>
  <name type="personal">
    <namePart type="family">Korkalo</namePart>
    <namePart type="given">Otto</namePart>
  </name>
  <name type="personal">
    <namePart type="family">Kahn</namePart>
    <namePart type="given">Svenja</namePart>
  </name>
  <abstract>In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking and thus allow operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors do not accumulate in the model, the methods are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in the camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP that efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor, and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.</abstract>
  <subject>
    <topic>CAD model</topic>
    <topic>Depth Camera</topic>
    <topic>ICP</topic>
    <topic>KINECT</topic>
    <topic>Mixed Reality</topic>
    <topic>Pose Estimation</topic>
    <topic>Tracking</topic>
  </subject>
  <classification authority="ddc">004</classification>
  <relatedItem type="host">
    <genre authority="marcgt">periodical</genre>
    <genre>academic journal</genre>
    <titleInfo>
      <title>Journal of Virtual Reality and Broadcasting</title>
    </titleInfo>
    <part>
      <detail type="volume">
        <number>13(2016)</number>
      </detail>
      <detail type="issue">
        <number>1</number>
      </detail>
      <date>2016</date>
    </part>
  </relatedItem>
  <identifier type="issn">1860-2037</identifier>
  <identifier type="urn">urn:nbn:de:0009-6-44132</identifier>
  <identifier type="doi">10.20385/1860-2037/13.2016.1</identifier>
  <identifier type="uri">http://nbn-resolving.de/urn:nbn:de:0009-6-44132</identifier>
  <identifier type="citekey">korkalo2016</identifier>
</mods>