Using Relational Histogram Features and Action Labelled Data to Learn Preconditions for Means-End Actions

Severin Fichtl, Dirk Kraft, Norbert Krüger, Frank Guerin

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

The outcome of many complex manipulation actions is contingent on the spatial relationships among pairs of objects, e.g. whether an object is “inside” or “on top of” another. Recognising these spatial relationships requires a vision system that can extract appropriate features from the visual input, capturing and representing the spatial relationships in an easily accessible way. We are interested in learning to predict the success of “means-end” actions that manipulate two objects at once, from exploratory actions and the observed sensorimotor contingencies. In this paper, we use relational histogram features and illustrate their effect on learning to predict the outcomes of a variety of “means-end” actions. The results show that our vision features can make the learning problem significantly easier, leading to increased learning rates and higher maximum performance. This work is particularly important for robots that need to reliably predict the success probability of their multi-object manipulation action repertoire in novel scenes.
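The abstract does not spell out how the relational histogram features are computed; as a minimal illustrative sketch only (not the authors' exact method), one common way to encode pairwise spatial relations such as “inside” or “on top of” is to histogram the pairwise offsets between two object point clouds. All names and bin choices below are assumptions for illustration:

```python
import numpy as np

def relational_histogram(points_a, points_b, bins=8, r_max=0.5):
    """Illustrative sketch (hypothetical, not the paper's method):
    histogram pairwise spatial relations between two object point clouds.
    Each point pair (p in A, q in B) contributes its vertical offset and
    horizontal distance; the resulting 2D histogram loosely captures
    relations such as 'on top of' (positive vertical offset, small
    horizontal distance)."""
    diff = points_a[:, None, :] - points_b[None, :, :]    # all pairwise offsets
    dz = diff[..., 2].ravel()                             # vertical offsets
    dxy = np.linalg.norm(diff[..., :2], axis=-1).ravel()  # horizontal distances
    hist, _, _ = np.histogram2d(dz, dxy, bins=bins,
                                range=[[-r_max, r_max], [0, r_max]])
    return (hist / hist.sum()).ravel()                    # normalised feature vector

# Toy example: object A resting on top of object B
rng = np.random.default_rng(0)
a = rng.uniform([-0.05, -0.05, 0.10], [0.05, 0.05, 0.20], size=(200, 3))
b = rng.uniform([-0.05, -0.05, 0.00], [0.05, 0.05, 0.10], size=(200, 3))
feat = relational_histogram(a, b)
```

Such a fixed-length feature vector could then be fed to a standard classifier to predict action success, in the spirit of the learning setup the abstract describes.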
Original language: English
Title of host publication: IROS 2015 workshop proceedings
Number of pages: 6
Publisher: IEEE
Publication date: 2 Oct 2015
Publication status: Published, 2 Oct 2015
Event: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems, Hamburg, Germany
Duration: 28 Sept 2015 – 2 Oct 2015

Conference

Conference: 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems
Country/Territory: Germany
City: Hamburg
Period: 28/09/2015 – 02/10/2015
