Multi-view rendered YCB dataset for mobile manipulation

Dataset

Description

This dataset contains different scenarios in which a mobile robot approaches a set of YCB objects using both its base and arm motions. There are sixteen sequences in total, with around 100 time steps per sequence. All sequences were generated with the BlenderProc photo-realistic renderer (https://github.com/DLR-RM/BlenderProc). Eight different YCB objects were used; each object has a unique 6D pose, and some objects also have one or more axes of symmetry. In each sequence, a maximum of three objects was randomly sampled. In addition, each sequence includes five views of the objects from external cameras placed 2.5-3.5 m away, facing towards the objects.

This dataset was used for the experimental evaluation in the following ICRA 2022 paper: Naik, L., Iversen, T. M., Kramberger, A., Wilm, J., & Krüger, N. (Accepted/In press). Multi-view object pose distribution tracking for pre-grasp planning on mobile robots. In 2022 IEEE International Conference on Robotics and Automation (ICRA). IEEE.

Technical details: In each sequence, the first five frames (0-4) contain views from the external cameras, while frames 5-104 provide the view from the robot camera as it approaches the objects. All ground truths are provided in the COCO annotation format.
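Since the ground truth uses the standard COCO annotation format, a minimal sketch of splitting one sequence into external-camera views and robot-camera frames might look like the following. Only the frame split (0-4 external, 5-104 robot) comes from the description above; the annotation file path and the image naming scheme are assumptions, so adapt them to the actual archive layout.

```python
import json
from collections import defaultdict

# Hypothetical layout: one COCO annotation file per sequence.
# The real file names in the archive may differ.
ANNOTATION_FILE = "sequence_00/coco_annotations.json"

EXTERNAL_FRAMES = range(0, 5)    # frames 0-4: external cameras
ROBOT_FRAMES = range(5, 105)     # frames 5-104: robot camera

with open(ANNOTATION_FILE) as f:
    coco = json.load(f)

# Group annotations by image id (standard COCO structure:
# top-level "images" and "annotations" lists).
anns_by_image = defaultdict(list)
for ann in coco["annotations"]:
    anns_by_image[ann["image_id"]].append(ann)

external_views, robot_views = [], []
for image in coco["images"]:
    # Assumes the frame index can be parsed from the file name,
    # e.g. "000007.png" -> 7; adapt to the actual naming scheme.
    stem = image["file_name"].rsplit("/", 1)[-1].split(".")[0]
    frame_idx = int(stem)
    entry = (image, anns_by_image[image["id"]])
    if frame_idx in EXTERNAL_FRAMES:
        external_views.append(entry)
    elif frame_idx in ROBOT_FRAMES:
        robot_views.append(entry)

print(f"{len(external_views)} external views, "
      f"{len(robot_views)} robot-camera frames")
```

The split here keys on the frame index rather than camera metadata, because the description defines the two camera groups purely by frame position within each sequence.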
Date made available: 16 Feb 2022
Publisher: Zenodo
