Identifying relevant feature-action associations for grasping unmodelled objects

Mikkel Tang Thomsen, Dirk Kraft, Norbert Krüger

Research output: Contribution to journal › Journal article › Research › peer-review



Action affordance learning based on visual sensory information is a crucial problem in the development of cognitive agents. In this paper, we present a method for learning action affordances based on basic visual features, which can vary in their granularity, order of combination and semantic content. The method is provided with a large and structured set of visual features, motivated by the visual hierarchy in primates, and finds relevant feature-action associations automatically. We apply our method in a simulated environment on three different object sets for the case of grasp affordance learning. When presented with novel objects, we achieve a success probability of 0.90 for box objects, 0.80 for round objects and up to 0.75 for open objects. In this work, we demonstrate in particular the effect of choosing appropriate feature representations: increasing the complexity of the perceptual representation yields a significant performance improvement. In doing so, we present important insights into how the design of the feature space influences the actual learning problem.
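The core idea of finding relevant feature-action associations can be illustrated by ranking candidate visual features according to how well their presence predicts grasp success across trials. The sketch below is not the authors' algorithm; the function name, the binary-feature representation, and the minimum-count threshold are illustrative assumptions.

```python
from collections import defaultdict

def rank_feature_action_associations(trials, min_count=5):
    """Rank visual features by empirical grasp success probability.

    trials: iterable of (features, success) pairs, where `features` is a
    set of feature identifiers observed before the grasp attempt and
    `success` is a bool recording the grasp outcome.
    """
    successes = defaultdict(int)
    counts = defaultdict(int)
    for features, success in trials:
        for f in features:
            counts[f] += 1
            successes[f] += int(success)
    # Keep only features observed often enough for a reliable estimate.
    ranked = [
        (f, successes[f] / counts[f])
        for f in counts
        if counts[f] >= min_count
    ]
    ranked.sort(key=lambda item: item[1], reverse=True)
    return ranked

# Hypothetical usage: "corner" co-occurs only with successful grasps,
# while "edge" also appears in failures, so "corner" ranks higher.
trials = [({"edge", "corner"}, True)] * 6 + [({"edge"}, False)] * 4
print(rank_feature_action_associations(trials))
```

A frequency threshold like `min_count` matters in practice: a feature seen once in a successful grasp would otherwise receive a spurious success probability of 1.0.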
Original language: English
Journal: Paladyn. Journal of Behavioral Robotics
Issue number: 1
Pages (from-to): 85-110
Publication status: Published - Mar 2015
