Motion planning for bin picking under object uncertainty requires either re-grasping the picked object or an online sensor system. The latter is advantageous in terms of computation time, as no time is wasted on an extra pick-and-place action. It does, however, place extra requirements on the motion planner, since the target position may change on the fly. This paper addresses that problem with a state-adjusting Partially Observable Markov Decision Process (POMDP), in which the state space is modified between runs to better fit previously solved problems. The approach relies on a set of waypoints containing information about which parts of the state space may contain feasible solutions. Waypoints are pushed around the state space by observing which states in their neighborhood lead to successfully solved problems. Two bin-picking scenarios are modeled with the proposed method: one in which the system receives an object-pose update while moving towards the place position, and another in which the update identifies the type of the grasped object, out of a fixed number of options, with each class to be deposited at a different place. When an online POMDP solver is utilized, the state-adjusting POMDP improves execution times by up to 28% compared to a non-adjusted POMDP.
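The waypoint-adjustment idea from the abstract can be sketched roughly as follows. This is a minimal illustrative assumption, not the paper's implementation: the function name, the Euclidean distance metric, and the fixed step size are all hypothetical choices for the sketch.

```python
def adjust_waypoints(waypoints, successful_states, step=0.1):
    """Nudge each waypoint toward the nearest recently successful state.

    waypoints and successful_states are sequences of equal-length tuples
    of coordinates; step is a hypothetical learning-rate-like factor.
    """
    adjusted = []
    for wp in waypoints:
        # Find the successful state closest to this waypoint (squared
        # Euclidean distance; the paper's actual criterion may differ).
        nearest = min(
            successful_states,
            key=lambda s: sum((a - b) ** 2 for a, b in zip(wp, s)),
        )
        # Move the waypoint a fraction of the way toward that state.
        adjusted.append(tuple(a + step * (b - a) for a, b in zip(wp, nearest)))
    return adjusted
```

For example, a waypoint at `(0.0, 0.0)` with a successful state observed at `(1.0, 0.0)` and `step=0.1` would move to `(0.1, 0.0)`, gradually concentrating waypoints in regions of the state space where problems were solved successfully.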
|Title of host publication||Proceedings of the 17th International Conference on Control, Automation and Systems|
|Publication status||Published - 2017|
|Event||17th International Conference on Control, Automation and Systems - Ramada Plaza, Jeju, Korea, Republic of|
Duration: 18 Oct 2017 → 21 Oct 2017
|Conference||17th International Conference on Control, Automation and Systems|
|Country||Korea, Republic of|
|Period||18/10/2017 → 21/10/2017|