TY - CONF
T1 - Multiview Aerial Visual Recognition (MAVREC)
T2 - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
AU - Dutta, Aritra
AU - Das, Srijan
AU - Nielsen, Jacob
AU - Chakraborty, Rajatshubhra
AU - Shah, Mubarak
PY - 2024/6
Y1 - 2024/6
N2 - Despite the commercial abundance of UAVs, aerial data acquisition remains challenging, and existing open-source UAV datasets, centered on Asia and North America, are small-scale or low-resolution and lack diversity in scene context. Additionally, the color content of scenes, the solar zenith angle, and the population density of different geographies influence data diversity. Together, these factors degrade the aerial-visual perception of deep neural network (DNN) models trained primarily on ground-view data, including open-world foundation models. To pave the way for a transformative era of aerial detection, we present Multiview Aerial Visual RECognition (MAVREC), a video dataset in which we record synchronized scenes from two perspectives: a ground camera and a drone-mounted camera. MAVREC consists of around 2.5 hours of industry-standard 2.7K-resolution video sequences, more than 0.5 million frames, and 1.1 million annotated bounding boxes, making it the largest ground-and-aerial-view dataset and the fourth largest among all drone-based datasets across all modalities and tasks. Through extensive benchmarking on MAVREC, we find that augmenting object detectors with ground-view images from the corresponding geographical location is a superior pretraining strategy for aerial detection. Building on this strategy, we benchmark MAVREC with a curriculum-based semi-supervised object detection approach that leverages labeled (ground and aerial) and unlabeled (aerial-only) images to enhance aerial detection.
DO - 10.1109/cvpr52733.2024.02140
M3 - Article in proceedings
T3 - Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition
SP - 22678
EP - 22690
BT - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
PB - IEEE
Y2 - 16 June 2024 through 22 June 2024
ER -