Learning Objects and Grasp Affordances through Autonomous Exploration

Dirk Kraft, Renaud Detry, Nicolas Pugeault, Emre Baseski, Justus Piater, Norbert Krüger

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

We describe a system for the autonomous learning of visual object representations and their grasp affordances on a robot-vision system. It segments objects by grasping and moving 3D scene features, and creates probabilistic visual representations for object detection, recognition and pose estimation, which are then augmented by continuous characterizations of grasp affordances generated through biased, random exploration. The system thus generates object and grasping knowledge through interaction with its environment, based on a careful balance between generic prior knowledge, encoded in (1) the embodiment of the system, (2) a vision system extracting structurally rich information from stereo image sequences, and (3) a number of built-in behavioral modules, on the one hand, and autonomous exploration on the other.
Original language: English
Title of host publication: Computer Vision Systems: 7th International Conference on Computer Vision Systems, ICVS 2009, Liège, Belgium, October 13-15, 2009. Proceedings
Publisher: Springer
Publication date: 2009
Pages: 235-244
ISBN (Print): 978-3-642-04666-7
DOIs
Publication status: Published - 2009
Event: International Conference on Computer Vision Systems, 2009 - Liège, Belgium
Duration: 13 Oct 2009 – 15 Oct 2009
Conference number: 7

Conference

Conference: International Conference on Computer Vision Systems, 2009
Number: 7
Country/Territory: Belgium
City: Liège
Period: 13/10/2009 – 15/10/2009
Series: Lecture Notes in Computer Science
Volume: 5815/2009
ISSN: 0302-9743
