DepthTrack : Unveiling the Power of RGBD Tracking
Yan, Song; Yang, Jinyu; Käpylä, Jani; Zheng, Feng; Leonardis, Aleš; Kämäräinen, Joni-Kristian (2021)
IEEE
2021
This publication is copyrighted. You may download, display and print it for your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:tuni-202205094495
Description
Peer reviewed
Abstract
RGBD (RGB plus depth) object tracking is gaining momentum as RGBD sensors have become popular in many application fields such as robotics. However, the best RGBD trackers are extensions of the state-of-the-art deep RGB trackers. They are trained with RGB data, and the depth channel is used as a sidekick for subtleties such as occlusion detection. This can be explained by the fact that there are no sufficiently large RGBD datasets to 1) train “deep depth trackers” and to 2) challenge RGB trackers with sequences for which the depth cue is essential. This work introduces a new RGBD tracking dataset - DepthTrack - that has twice as many sequences (200) and scene types (40) as the largest existing dataset, and three times more objects (90). In addition, the average length of the sequences (1473), the number of deformable objects (16) and the number of annotated tracking attributes (15) have been increased. Furthermore, by running the state-of-the-art RGB and RGBD trackers on DepthTrack, we propose a new RGBD tracking baseline, namely DeT, which reveals that deep RGBD tracking indeed benefits from genuine training data. The code and dataset are available at https://github.com/xiaozai/DeT.
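To make the abstract's notion of a "deep depth tracker" input concrete, the sketch below stacks a color frame and a depth map into a single 4-channel RGBD array, as a tracker trained on genuine RGBD data would consume. This is a generic preprocessing illustration, not DeT's actual pipeline; the function name, depth range, and normalization are assumptions, and the real preprocessing is defined in the authors' repository.

```python
import numpy as np

def make_rgbd_input(rgb, depth, max_depth_mm=10000.0):
    """Stack an RGB frame and a depth map into a 4-channel RGBD input.

    Hypothetical helper, not part of the DeT codebase.
    rgb:   (H, W, 3) uint8 color image
    depth: (H, W) depth map in millimetres (typical RGBD sensor output)
    """
    rgb_f = rgb.astype(np.float32) / 255.0                # colors -> [0, 1]
    d = np.clip(depth.astype(np.float32), 0.0, max_depth_mm) / max_depth_mm
    return np.dstack([rgb_f, d])                          # (H, W, 4) float32

# Synthetic frame standing in for one DepthTrack sequence frame.
rgb = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
depth = np.random.randint(0, 10001, (480, 640)).astype(np.uint16)
x = make_rgbd_input(rgb, depth)
print(x.shape)  # (480, 640, 4)
```

Keeping depth as an extra normalized channel (rather than a side signal for occlusion heuristics) is the simplest way to let a deep tracker learn from the depth cue directly.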
Collections
- TUNICRIS-julkaisut [17001]