2022-05-16: The results of the competition have been published on this page.
2022-04-25: Extra training data has been released (Baidu Drive, extraction code: VISO).
2022-04-20: Test server online. Test data has been released (Google Drive, Baidu Drive, extraction code: VISO).
2022-03-20: Registration for Track 2 and Track 3 begins, and their validation servers are online.
2022-03-15: Registration for Track 1 begins and its validation server is online. Some useful tools, such as scripts for generating COCO-format JSON files and for reading and writing XML annotation files, have been released on GitHub (a sketch of such a conversion follows these updates).
2022-03-08: Validation data has been released (Google Drive, Baidu Drive, extraction code: VISO). Participants can use the released data to develop their algorithms.
2022-03-01: Training data has been released (Google Drive, Baidu Drive, extraction code: VISO). Participants can use the released data to develop their algorithms.
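For illustration, below is a minimal sketch of assembling per-frame bounding box annotations into a COCO-format JSON file. The annotation layout, file names, and image size used here are assumptions for illustration, not the released tools:

```python
import json

# Hypothetical per-frame annotations as (frame_name, [x, y, w, h], class) tuples;
# the actual layout of the released VISO annotations may differ.
records = [
    ("000001.jpg", [120.0, 45.0, 12.0, 8.0], "car"),
    ("000001.jpg", [300.0, 210.0, 30.0, 14.0], "ship"),
]

classes = ["plane", "car", "ship", "train"]  # the four VISO object types
cat_ids = {name: i + 1 for i, name in enumerate(classes)}

images, annotations, image_ids = [], [], {}
for ann_id, (fname, (x, y, w, h), cls) in enumerate(records, start=1):
    if fname not in image_ids:
        image_ids[fname] = len(image_ids) + 1
        images.append({"id": image_ids[fname], "file_name": fname,
                       "width": 12000, "height": 5000})
    annotations.append({"id": ann_id, "image_id": image_ids[fname],
                        "category_id": cat_ids[cls], "bbox": [x, y, w, h],
                        "area": w * h, "iscrowd": 0})

coco = {"images": images, "annotations": annotations,
        "categories": [{"id": i, "name": n} for n, i in cat_ids.items()]}
with open("viso_train_coco.json", "w") as f:
    json.dump(coco, f)
```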
Satellite video cameras can provide continuous observation of large-scale areas, which suits several downstream remote sensing applications, including traffic management, ocean monitoring, and smart cities. Recently, moving object detection and tracking in satellite videos have attracted increasing attention in both academia and industry. However, accurate and robust moving object detection and tracking in satellite videos remain challenging, due to the lack of high-quality, well-annotated public datasets and comprehensive benchmarks for performance evaluation. To this end, we organize a challenge based on the recent VISO dataset, focusing on the specific challenges and research problems of moving object detection and tracking in satellite videos. We hope this challenge will inspire the community to explore the tough problems in satellite video analysis and ultimately drive technological advancement in emerging applications.
The 1st Challenge on Moving Object Detection and Tracking in Satellite Videos (SatVideoDT) at ICPR 2022 aims to facilitate the development of video object detection and tracking algorithms and to push forward research on moving object detection and tracking in satellite videos. The challenge includes the following three competition tracks.
Given the VISO dataset with 100 satellite videos (32,825 frames) captured by Jilin-1 satellite platforms, the goal of this task is to achieve moving object detection across whole videos. We will provide a training set (26,000 frames) and a validation set (3,250 frames) with full bounding box annotations. A test set (3,575 frames) will also be provided, but with satellite images only. Participants are expected to train their models on the training set and validate performance on the validation set. The finalized model is then used to generate detection results on the test set. The final performance will be automatically evaluated by the organizers with a set of objective quantitative metrics (see Evaluation Metrics, Track 1).
Given the initial bounding box annotation of a specific object, this task requires estimating the location of that object across frames. For this task, we will provide 100 high-quality videos (videos 1 to 100) with a total of 32,825 frames. Specifically, videos 1 to 80 will be used as the training set and videos 81 to 90 as the validation set. Bounding box annotations of the specific objects in each frame of the training and validation sets will be provided. The test set is composed of videos 91 to 100, for which only the annotation of the first frame will be provided for initialization. Participants are expected to train their models on the training set and validate performance on the validation set. The finalized model is then used to generate tracking results on the test set.
This task aims at locating multiple objects of interest, maintaining their identities, and yielding their individual trajectories across whole videos. For this task, 100 sequences (videos 1 to 100) with a total of 32,825 frames from the VISO dataset will be provided. Specifically, videos 1 to 80 will be used as the training set and videos 81 to 90 as the validation set. The bounding box annotations and the instance ID of each object in each frame will be provided. The test set is composed of videos 91 to 100. Participants are expected to train their models on the training set and validate performance on the validation set. The finalized model is then used to generate tracking results on the test set.
This challenge is built upon our recently released VISO dataset, the first well-annotated large-scale satellite video dataset for moving object detection and tracking. The dataset was captured by the Jilin-1 satellite constellation at different positions along the satellite orbit. The recorded videos cover areas of several square kilometers in real scenes. Each image in the videos has a resolution of 12,000 × 5,000 pixels and contains a large number of objects at different scales. Moreover, four common types of moving objects, namely plane, car, ship, and train, are manually labeled. An example of a labeled video is shown below:
To evaluate the detection performance of the methods submitted to the challenge, the commonly used object detection metric, mean average precision (mAP), will be used. We report the average results over all satellite videos in the evaluation dataset. Note that the final results are ranked by mAP (IoU = 0.5) calculated on the test set.
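For reference, mAP at IoU = 0.5 can be computed with the standard COCO evaluation toolkit. The following is a minimal sketch, assuming the ground truth is stored as a COCO-format JSON file and the detections as a COCO-format result file (both file names are illustrative):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Hypothetical file names; detections.json is a COCO-format result list of
# {"image_id", "category_id", "bbox", "score"} entries.
gt = COCO("viso_val_coco.json")
dt = gt.loadRes("detections.json")

evaluator = COCOeval(gt, dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()

# stats[1] is AP at IoU = 0.50, which matches the Track 1 ranking metric.
print("mAP(IoU=0.5):", evaluator.stats[1])
```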
Following the standard evaluation protocol of the OTB visual tracking dataset, all trackers will be evaluated using two metrics: Distance Precision Rate (DPR) and Overlap Success Rate (OSR). Note that the final results are ranked according to the AUC values of the DPR and OSR curves calculated on the test set, weighted 50% and 50% respectively.
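For reference, below is a minimal sketch of how the DPR and OSR curves and the combined score could be computed from per-frame [x, y, w, h] boxes. The threshold ranges (0 to 50 pixels for precision, 0 to 1 for overlap) follow common OTB practice and are assumptions here; the 50/50 weighting reproduces the CFME baseline score below (0.5 × (0.504 + 0.282) × 100 ≈ 39.3):

```python
import numpy as np

def center_error(pred, gt):
    """Per-frame center location error (pixels) between [x, y, w, h] boxes."""
    return np.linalg.norm((pred[:, :2] + pred[:, 2:] / 2)
                          - (gt[:, :2] + gt[:, 2:] / 2), axis=1)

def overlap(pred, gt):
    """Per-frame IoU between [x, y, w, h] boxes."""
    lt = np.maximum(pred[:, :2], gt[:, :2])
    rb = np.minimum(pred[:, :2] + pred[:, 2:], gt[:, :2] + gt[:, 2:])
    inter = np.prod(np.clip(rb - lt, 0, None), axis=1)
    union = np.prod(pred[:, 2:], axis=1) + np.prod(gt[:, 2:], axis=1) - inter
    return inter / np.maximum(union, 1e-12)

def track2_score(pred, gt):
    """AUCs of the precision (DPR) and success (OSR) plots, weighted 50/50."""
    errors, ious = center_error(pred, gt), overlap(pred, gt)
    dpr_auc = np.mean([(errors <= t).mean() for t in np.arange(0, 51)])
    osr_auc = np.mean([(ious > t).mean() for t in np.linspace(0, 1, 21)])
    return 100 * (0.5 * dpr_auc + 0.5 * osr_auc)

# Illustrative two-frame example.
pred = np.array([[10, 10, 20, 20], [12, 11, 20, 20]], dtype=float)
gt = np.array([[11, 10, 20, 20], [13, 12, 20, 20]], dtype=float)
print("score:", track2_score(pred, gt))
```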
The metrics of the generic Multiple Object Tracking (MOT) Challenge benchmark will be used for quantitative evaluation. The final results of multiple object tracking are ranked according to Multiple Object Tracking Accuracy (MOTA) and IDF1 calculated on the test set, weighted 50% and 50% respectively.
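For reference, MOTA and IDF1 can be computed with the widely used py-motmetrics package. Below is a minimal sketch on one illustrative frame; the 50/50 weighting matches the SORT baseline score below (0.5 × (0.418 + 0.389) × 100 ≈ 40.4):

```python
import numpy as np
import motmetrics as mm

# One accumulator per video, updated frame by frame with ground-truth IDs,
# hypothesis IDs, and a pairwise distance matrix (1 - IoU).
acc = mm.MOTAccumulator(auto_id=True)

# Illustrative single frame: two GT objects, two predictions ([x, y, w, h]).
gt_boxes = np.array([[10., 10., 20., 20.], [50., 50., 20., 20.]])
hyp_boxes = np.array([[11., 10., 20., 20.], [49., 51., 20., 20.]])
dist = mm.distances.iou_matrix(gt_boxes, hyp_boxes, max_iou=0.5)
acc.update([1, 2], [1, 2], dist)  # GT IDs, hypothesis IDs, distances

mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["mota", "idf1"], name="demo")
mota, idf1 = summary.loc["demo", "mota"], summary.loc["demo", "idf1"]
print("score:", 100 * (0.5 * mota + 0.5 * idf1))
```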
Over the last few years, several milestone methods have been developed for satellite videos, including DSFNet and CFME. In this challenge, DSFNet is used as the detection baseline model, and submitted results should be at least on par with DSFNet. CFME is used as the single object tracking baseline model, and submitted results should be at least on par with CFME. For multiple object tracking, we selected SORT as the baseline model. Note that the inputs to the tracking baselines (i.e., the detection results at each frame) are the detections produced by DSFNet; a sketch of this tracking-by-detection pipeline is given after the baseline tables below. Solutions with evaluation metric values lower than these baselines will not be ranked on the leaderboard.
Method | mAP (IoU=0.5) | Score
---|---|---
DSFNet | 0.43 | 43
Method | DPR | OSR | Score
---|---|---|---
CFME | 0.504 | 0.282 | 39.3
Method | MOTA | IDF1 | Score
---|---|---|---
SORT | 0.418 | 0.389 | 40.4
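As an illustration of the Track 3 baseline pipeline, the sketch below feeds per-frame detections (e.g., exported from DSFNet; the loading step is omitted and the numbers are made up) into the reference SORT implementation. The `Sort` class and its parameters are those of the original SORT repository's `sort.py`; the detection export format is an assumption here:

```python
import numpy as np
from sort import Sort  # sort.py from the original SORT repository

tracker = Sort(max_age=1, min_hits=3, iou_threshold=0.3)

# Per-frame detections as [x1, y1, x2, y2, score] arrays, e.g., exported
# from DSFNet (the loading step is omitted; these numbers are made up).
detections_per_frame = [
    np.array([[100.0, 40.0, 112.0, 48.0, 0.9]]),
    np.array([[102.0, 41.0, 114.0, 49.0, 0.8]]),
]

for frame_id, dets in enumerate(detections_per_frame, start=1):
    # Each returned row is [x1, y1, x2, y2, track_id].
    for x1, y1, x2, y2, track_id in tracker.update(dets):
        print(frame_id, int(track_id), x1, y1, x2 - x1, y2 - y1)
```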
We use CodaLab for online submission in the development phase. Here, we provide examples (Track 1, Track 2, Track 3) to help participants format their submissions. In the test phase, the final results and the source code (for the top three participating teams) must be submitted by email to satvideodt@outlook.com. Please refer to our online pages (Track 1, Track 2, Track 3) for details of the submission rules.
Important Dates
Event | Date
---|---
Release of part of the training data | Feb 15, 2022
Release of all training and validation data | Feb 28, 2022
Validation server online | Mar 15, 2022
Final test data release; test server online | Apr 20, 2022
Test result submission deadline | May 10, 2022 (23:59 Pacific Time)
Fact sheet / code / model submission deadline | May 10, 2022 (23:59 Pacific Time)
Preliminary test scores released to participants | May 12, 2022
Report submission deadline (optional) | May 15, 2022
The organization committee of the ICPR 2022 conference will issue award certificates to the top three teams of each track. Top-ranked teams will be invited to submit co-authored papers to the ICPR 2022 Challenge for peer review. If a paper is accepted and published, the team must describe its solution and ensure the reproducibility of its competition results. Submitting a co-authored paper is optional and does not affect a team's participation in the challenge or eligibility for awards.
Each team may have at most six members. Each team may submit only one algorithm for final ranking.
Competition Results
SatVideoDT Challenges@ICPR'2022
Track 1: Moving object detection in satellite videos.
1st Place Winner:
- Team Name: CSU-MOD
- User name: CSU-MOD
- Members: Jian Yang, Zhuang Zhou, and Weilong Guo
- Affiliation: Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences; Key Laboratory of Space Utilization, Chinese Academy of Sciences
2nd Place Winner:
- Team Name: Motion King
- User name: xixiha, XC_00
- Members: Xiyu Qi, Kelong Tu, Cong Xu, Shudan Zhu, and Lai Chen
- Affiliation: China University of Geosciences (Wuhan); Aerospace Information Research Institute, Chinese Academy of Sciences; Southwest Jiaotong University
3rd Place Winner:
- Team Name:
- User name: JingwenH
- Members: Jingwen Huang
- Affiliation: Zhengzhou University
Track 2: Single object tracking in satellite videos.
1st Place Winner:
- Team Name: SkyCV
- User name: binlin
- Members: Bin Lin, Chaocan Xue, Jinlei Zheng, Limei Qin, and Ying Li
- Affiliation: Guilin University of Technology; Northwestern Polytechnical University
2nd Place Winner:
- Team Name: CSU-SOT
- User name: DonDominic
- Members: Manqi Zhao
- Affiliation: Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences; Key Laboratory of Space Utilization, Chinese Academy of Sciences
3rd Place Winner:
- Team Name: ReDConJur
- User name: Aluka
- Members: Zhenzhong Chen, Lu Ruan, Mingpeng Cui, Guanchen Ding, and Guangwei Jiang
- Affiliation: School of Remote Sensing and Information Engineering, Wuhan University
Track 3: Multiple-object tracking in satellite videos.
1st Place Winner:
- Team Name: CSU-MOT
- User name: xljhh
- Members: Yuhan Sun, Manqi Zhao, and Kaiyang Cao
- Affiliation: Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences; Key Laboratory of Space Utilization, Chinese Academy of Sciences
2nd Place Winner:
- Team Name: Track King
- User name: xixiha, zsd, root, whu_cccccsd
- Members: Kelong Tu, Lingyu Kong, Cong Xu, Shudan Zhu, and Shaodong Chen
- Affiliation: China University of Geosciences (Wuhan); Aerospace Information Research Institute, Chinese Academy of Sciences; Southwest Jiaotong University; Wuhan University
3rd Place Winner:
- Team Name: AHU MMIC
- User name: AHU MMIC
- Members: Qing Shen, Lei Liu, Zhicheng Zhao, Chenglong Li, and Yun Xiao
- Affiliation: Anhui University
@article{yin2021detecting,
  title={Detecting and Tracking Small and Dense Moving Objects in Satellite Videos: A Benchmark},
  author={Yin, Qian and Hu, Qingyong and Liu, Hao and Zhang, Feng and Wang, Yingqian and Lin, Zaiping and An, Wei and Guo, Yulan},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  year={2021},
  publisher={IEEE}
}