Robotic systems are today still mostly unable to perform complex tasks in unknown environments. While there have been many approaches to coping with unknown environments, for example in mobile robot navigation, work on more complex tasks, such as object handling, is less developed.
This work presents a system that is able to learn autonomously about objects and applicable grasps in an unknown environment through exploratory manipulation, and to then use this grounded knowledge in a planning setup to address complex tasks. A set of different subsystems is needed to achieve this goal. We therefore present in this work a set of components and how they interact with each other.
These components, and the details of their integration, are described in a collection of papers which form the core of this dissertation. The papers are sorted into relevant chapters, and each chapter contains an introduction to position the presented papers within the whole system. To give an overall picture of the work, a common introduction that motivates the work and places it in relation to other work is given. A joint conclusion and summary, which includes a look at possible extensions, is also provided. The topics are ordered so that we proceed from the more general integration works towards the works describing the individual components.
The first chapter gives an overview of the system that is able to learn a grounded visual object representation and a grounded grasp representation. In the following part, we describe how these grounding procedures can be embedded in a three-level cognitive architecture. Our initial work on using a tactile sensor to enrich the object representations, as well as to allow for more complex actions, is presented here as well.
Since our system is concerned with learning about unknown objects, we need to establish that something is an object (and not, for example, an obstacle). One of the initial steps is to see whether we can manipulate the object. We therefore present work that describes how to achieve physical control over an object using a feature-action relationship. We also explain how this feature-action relationship can be improved through learning from a set of experiences.
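The idea of improving a feature-action relationship from experience can be illustrated with a minimal sketch. This is not the system described in the dissertation; the class, feature, and action names are hypothetical, and the sketch simply assumes that each attempted grasp is logged as a (feature, action, success) triple and that the best action for a feature is the one with the highest observed success rate.

```python
from collections import defaultdict

class FeatureActionModel:
    """Hypothetical sketch: associate object features with grasp actions
    by counting successes over a set of experiences."""

    def __init__(self, actions):
        self.actions = actions
        # (feature, action) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def update(self, feature, action, success):
        """Record one experience of applying `action` to an object with `feature`."""
        record = self.stats[(feature, action)]
        record[1] += 1
        if success:
            record[0] += 1

    def success_rate(self, feature, action):
        successes, attempts = self.stats[(feature, action)]
        return successes / attempts if attempts else 0.0

    def best_action(self, feature):
        """Pick the action with the highest estimated success for this feature."""
        return max(self.actions, key=lambda a: self.success_rate(feature, a))

# Toy usage with invented feature/action labels:
model = FeatureActionModel(["top_grasp", "side_grasp"])
model.update("elongated", "side_grasp", True)
model.update("elongated", "top_grasp", False)
model.best_action("elongated")  # -> "side_grasp"
```

A real system would of course use richer feature descriptors and a smoothed or probabilistic estimate rather than raw counts, but the structure, experiences in, improved action selection out, is the same.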
Once physical control is achieved, we can move the object in such a way that we can gather visual information from different viewpoints. We describe how this information can be integrated into a single object representation. The gathered representations can finally be used in a system that is able to execute different plans. We present a system that is able to generate plans for simple tasks such as cleaning up a table. The system is able to gather required information (sensing actions) and has a plan-monitoring component to detect problems during plan execution.
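The interplay of sensing actions and plan monitoring can be sketched as an execute-monitor loop. This is a simplified illustration under assumed names, not the planner from the dissertation: each step declares a fact it needs to know (filled in by a sensing action if missing) and a fact it should make true (checked by the monitor, which reports failure so that replanning can be triggered).

```python
# Hypothetical sketch of plan execution with sensing actions and monitoring.

class Step:
    def __init__(self, name, effect, precondition=None):
        self.name = name
        self.effect = effect              # fact the step should make true
        self.precondition = precondition  # fact that must be known beforehand

def execute_plan(plan, world, sense, act):
    """Execute each step; sense missing information; monitor expected effects."""
    for step in plan:
        if step.precondition and step.precondition not in world:
            sense(world, step.precondition)   # sensing action fills the gap
        act(world, step)
        if step.effect not in world:          # monitoring: did the step work?
            return ("failed", step.name)      # caller can replan from here
    return ("done", None)

# Toy "clean up the table" run with stub sensing/acting functions:
world = set()
sense = lambda w, fact: w.add(fact)
act = lambda w, step: w.add(step.effect)
plan = [Step("locate_cup", "cup_located"),
        Step("grasp_cup", "cup_grasped", precondition="cup_located"),
        Step("place_in_box", "table_clear")]
execute_plan(plan, world, sense, act)  # -> ("done", None)
```

In a real robot the `sense` and `act` stubs would be perception and manipulation routines whose outcomes are uncertain, which is exactly why the monitoring check after each step is needed.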
Publisher: Syddansk Universitet. Det Tekniske Fakultet
Number of pages: 136
Publication status: Published - 2009