An Adaptable Robot Vision System Performing Manipulation Actions with Flexible Objects

Leon Bodenhagen, Andreas Rune Fugl, Andreas Jordt, Morten Willatzen, Knud Aulkjær Andersen, Martin Mølbach Olsen, Reinhard Koch, Henrik Gordon Petersen, Norbert Krüger

Research output: Contribution to journal › Journal article › Research › Peer-reviewed


Abstract

This paper describes an adaptable system that is able to perform manipulation operations (such as Peg-in-Hole or Laying-Down actions) with flexible objects. As such objects easily change their shape significantly during the execution of an action, traditional strategies, e.g., for solving path-planning problems, are often not applicable. It is therefore required to integrate visual tracking and shape reconstruction with a physical modeling of the materials and their deformations, as well as action learning techniques. All these submodules have been integrated into a demonstration platform operating in real time. Simulations have been used to bootstrap the learning of optimal actions, which are subsequently improved through real-world executions. To achieve reproducible results, we demonstrate this for cast silicone test objects of regular shape.

Original language: English
Journal: IEEE Transactions on Automation Science and Engineering
Volume: 11
Issue number: 3
Pages (from-to): 749 - 765
ISSN: 1545-5955
Publication status: Published - Jul 2014

Keywords

  • 3D modeling
  • action learning
  • deformation modeling
  • flexible objects
  • shape tracking
