Open-Ended Continuous Learning of Compound Goals

Paresh Dhakan*, Kathryn Kasmarik, Inaki Rano, Nazmul Siddique

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Continuous learning of increasingly difficult tasks is envisaged to provide learning autonomy to robots. The tasks, however, are often human-designed or require significant external intervention. This article proposes a domain-independent goal generation mechanism that generates goals at different levels of complexity. Using a mobile robot application, we demonstrate how an agent generates compound goals by combining state-space attributes from the states it has experienced during exploration, and uses task-independent reward functions to learn the solutions to those goals. Finally, the whole process is repeated when the environment changes, thus forming a continuous learning architecture. Results from the experiments show how the agent can combine complementary and contradictory groups of state attributes to form expressive goals and learn behaviors akin to wall following, obstacle avoidance, and lane following without any previous knowledge of its environment.
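The abstract's core idea — forming compound goals by combining state-space attributes and scoring them with a task-independent reward — can be illustrated with a toy sketch. All names and predicates below are hypothetical illustrations, not the authors' implementation:

```python
# Toy sketch: compound goals as conjunctions of state-attribute predicates.
# Attribute names, thresholds, and the state format are illustrative only.

# Primitive state attributes the agent might extract during exploration
attributes = {
    "wall_left_near": lambda s: s["left_dist"] < 0.3,
    "front_clear":    lambda s: s["front_dist"] > 0.5,
}

def compound_goal(attr_names):
    """Combine attribute predicates into one compound goal (a conjunction)."""
    preds = [attributes[n] for n in attr_names]
    return lambda s: all(p(s) for p in preds)

def reward(goal, state):
    """Task-independent reward: 1 when the compound goal is satisfied."""
    return 1.0 if goal(state) else 0.0

# A compound goal loosely resembling wall following: keep a wall close on
# the left while keeping the path ahead clear.
wall_follow = compound_goal(["wall_left_near", "front_clear"])
print(reward(wall_follow, {"left_dist": 0.2, "front_dist": 1.0}))  # 1.0
print(reward(wall_follow, {"left_dist": 0.8, "front_dist": 1.0}))  # 0.0
```

Because the reward only checks goal satisfaction, the same learning machinery can in principle be reused for any combination of attributes, which is the sense in which the reward is task-independent.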

Original language: English
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 13
Issue number: 2
Pages (from-to): 274-285
ISSN: 2379-8920
DOIs
Publication status: Published - Jun 2021

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Autonomous agent architecture
  • goal generation
  • hierarchical clustering
  • open-ended learning
  • state attribute aggregation

