TY - GEN
T1 - Multi-view object pose distribution tracking for pre-grasp planning on mobile robots
AU - Naik, Lakshadeep
AU - Iversen, Thorbjørn Mosekjær
AU - Kramberger, Aljaž
AU - Wilm, Jakob
AU - Krüger, Norbert
N1 - Funding Information:
This work was funded by the Innovation Fund Denmark in the context of the FacilityCobot project and by the VolkswagenStiftung in the context of the ReThiCare project.
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The ability to track the 6D pose distribution of an object while a mobile manipulator robot is still approaching it can enable the robot to pre-plan grasps that combine base and arm motion. However, tracking a 6D object pose distribution from a distance is challenging due to the limited view of the robot camera. In this work, we present a framework that fuses observations from external stationary cameras with observations from the moving robot camera and sequentially tracks the object pose distribution over time, enabling 6D object pose distribution tracking from a distance. We model the object pose posterior as a multi-modal distribution, which yields better performance under the uncertainties introduced by large camera-object distances, occlusions, and object geometry. We evaluate the proposed framework on a simulated multi-view dataset using objects from the YCB dataset. The results show that our framework enables accurate tracking even when the robot camera has poor visibility of the object.
AB - The ability to track the 6D pose distribution of an object while a mobile manipulator robot is still approaching it can enable the robot to pre-plan grasps that combine base and arm motion. However, tracking a 6D object pose distribution from a distance is challenging due to the limited view of the robot camera. In this work, we present a framework that fuses observations from external stationary cameras with observations from the moving robot camera and sequentially tracks the object pose distribution over time, enabling 6D object pose distribution tracking from a distance. We model the object pose posterior as a multi-modal distribution, which yields better performance under the uncertainties introduced by large camera-object distances, occlusions, and object geometry. We evaluate the proposed framework on a simulated multi-view dataset using objects from the YCB dataset. The results show that our framework enables accurate tracking even when the robot camera has poor visibility of the object.
U2 - 10.1109/ICRA46639.2022.9812339
DO - 10.1109/ICRA46639.2022.9812339
M3 - Article in proceedings
AN - SCOPUS:85136332783
T3 - IEEE International Conference on Robotics and Automation
SP - 1554
EP - 1561
BT - 2022 International Conference on Robotics and Automation (ICRA)
PB - IEEE
T2 - 39th IEEE International Conference on Robotics and Automation, ICRA 2022
Y2 - 23 May 2022 through 27 May 2022
ER -