An Adaptable Robot Vision System Performing Manipulation Actions with Flexible Objects

Leon Bodenhagen, Andreas Rune Fugl, Andreas Jordt, Morten Willatzen, Knud Aulkjær Andersen, Martin Mølbach Olsen, Reinhard Koch, Henrik Gordon Petersen, Norbert Krüger

Publication: Contribution to journal › Journal article › Research › Peer-reviewed


Abstract

This paper describes an adaptable system that performs manipulation operations (such as Peg-in-Hole or Laying-Down actions) with flexible objects. Because such objects can change their shape significantly during the execution of an action, traditional strategies, e.g., for solving path-planning problems, are often not applicable. It is therefore necessary to integrate visual tracking and shape reconstruction with a physical model of the materials and their deformations, as well as with action-learning techniques. All of these submodules have been integrated into a demonstration platform operating in real time. Simulations were used to bootstrap the learning of optimal actions, which are subsequently improved through real-world executions. To achieve reproducible results, we demonstrate this for cast silicone test objects of regular shape.

Original language: English
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 11
Issue number: 3
Pages (from-to): 749-765
ISSN: 1545-5955
DOI
Status: Published - Jul. 2014
