Multi-view object pose distribution tracking for pre-grasp planning on mobile robots

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

The ability to track the 6D pose distribution of an object while a mobile manipulator robot is still approaching it enables the robot to pre-plan grasps that combine base and arm motion. However, tracking a 6D object pose distribution from a distance is challenging due to the limited view of the robot camera. In this work, we present a framework that fuses observations from external stationary cameras with those from a moving robot camera and sequentially tracks the object pose distribution over time, enabling 6D object pose distribution tracking from a distance. We model the object pose posterior as a multi-modal distribution, which improves robustness to the uncertainties introduced by large camera-object distances, occlusions, and object geometry. We evaluate the proposed framework on a simulated multi-view dataset using objects from the YCB dataset. Results show that our framework enables accurate tracking even when the robot camera has poor visibility of the object.
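The abstract does not specify the tracking algorithm, but the idea of fusing per-camera observations into one multi-modal pose posterior can be illustrated with a minimal particle-filter sketch, since a particle set naturally represents multiple modes. The sketch below is an assumption-laden toy, not the paper's method: the 4-DoF pose parameterization (x, y, z, yaw), the Gaussian per-camera measurement model, and all function names (propagate, view_likelihood, fuse_and_resample) are hypothetical. A camera with poor visibility (e.g., the distant robot camera) is given a large measurement noise, so it contributes less to the fused posterior.

```python
# Toy multi-view pose distribution tracker (particle filter sketch).
# NOT the paper's implementation; illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(0)

def propagate(particles, trans_noise=0.01, rot_noise=0.05):
    """Motion model: diffuse each pose hypothesis (x, y, z, yaw)."""
    noise = rng.normal(scale=[trans_noise] * 3 + [rot_noise], size=particles.shape)
    return particles + noise

def view_likelihood(particles, measured_pose, sigma):
    """Gaussian likelihood of one camera's pose measurement."""
    d = particles - measured_pose
    d[:, 3] = (d[:, 3] + np.pi) % (2 * np.pi) - np.pi  # wrap yaw difference
    return np.exp(-0.5 * np.sum((d / sigma) ** 2, axis=1))

def fuse_and_resample(particles, weights, measurements):
    """Multiply likelihoods across cameras (assumed independent), then
    resample; the particle set can keep several modes when views disagree."""
    for measured_pose, sigma in measurements:
        weights = weights * view_likelihood(particles, measured_pose, sigma)
    weights = weights + 1e-300          # guard against total underflow
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy run: two stationary cameras and one distant, noisy robot camera
# observe an object at pose (1.0, 0.0, 0.5, 0.3).
particles = rng.uniform([-2, -2, 0, -np.pi], [2, 2, 1, np.pi], size=(500, 4))
weights = np.full(500, 1.0 / 500)
true_pose = np.array([1.0, 0.0, 0.5, 0.3])
for _ in range(10):
    particles = propagate(particles)
    measurements = [
        (true_pose + rng.normal(scale=0.02, size=4), 0.2),  # stationary cam 1
        (true_pose + rng.normal(scale=0.02, size=4), 0.2),  # stationary cam 2
        (true_pose + rng.normal(scale=0.30, size=4), 1.0),  # distant robot cam
    ]
    particles, weights = fuse_and_resample(particles, weights, measurements)
print("posterior mean pose:", particles.mean(axis=0))
```

In this sketch the stationary cameras dominate the fused likelihood because of their smaller sigma, mirroring the abstract's claim that external cameras compensate when the robot camera has poor visibility of the object.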

Original language: English
Title of host publication: 2022 International Conference on Robotics and Automation (ICRA)
Publisher: IEEE
Publication date: 2022
Pages: 1554-1561
ISBN (Electronic): 9781728196817
DOIs
Publication status: Published - 2022
Event: 39th IEEE International Conference on Robotics and Automation, ICRA 2022 - Philadelphia, United States
Duration: 23 May 2022 - 27 May 2022

Conference

Conference: 39th IEEE International Conference on Robotics and Automation, ICRA 2022
Country/Territory: United States
City: Philadelphia
Period: 23/05/2022 - 27/05/2022
Sponsor: IEEE, IEEE Robotics and Automation Society
Series: IEEE International Conference on Robotics and Automation
ISSN: 2152-4092

