Teaching a Robot the Semantics of Assembly Tasks

T. R. Savarimuthu, A. G. Buch, C. Schlette, N. Wantia, Jürgen Roßmann, D. Martinez, G. Alenyà, C. Torras, Aleš Ude, Bojan Nemec, Aljaž Kramberger, Florentin Wörgötter, Eren Erdal Aksoy, Jérémie Papon, Simon Haller, J. Piater, N. Krüger

Research output: Contribution to journal › Journal article › Research › peer-review



We present a three-level cognitive system in a learning by demonstration context. The system allows for learning and transfer on the sensorimotor level as well as on the planning level. The fundamentally different data structures associated with these two levels are connected by an efficient mid-level representation based on so-called 'semantic event chains.' We describe the representations in detail and quantify the effect of the associated learning procedures at each level under different amounts of noise. Moreover, we demonstrate the performance of the overall system through three demonstrations performed at a project review. The described system has a technology readiness level (TRL) of 4, which an ongoing follow-up project will raise to TRL 6.
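The abstract's 'semantic event chain' idea can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: a scene at each frame is reduced to pairwise spatial relations between tracked segments (e.g. touching or not touching), and the chain keeps only the frames where some relation changes, discarding raw trajectory detail. The object names and relation encoding below are invented for illustration.

```python
# Hypothetical sketch of a semantic event chain (SEC).
# Each frame is a dict of pairwise relations between scene segments:
# 0 = not touching, 1 = touching.

def semantic_event_chain(relation_frames):
    """Compress a sequence of relation states to its change events."""
    chain = []
    for frame in relation_frames:
        # Keep a frame only if some relation changed since the last event.
        if not chain or frame != chain[-1]:
            chain.append(frame)
    return chain

# Example: a peg-in-hole assembly observed over five frames.
frames = [
    {("hand", "peg"): 0, ("peg", "hole"): 0},  # approach
    {("hand", "peg"): 1, ("peg", "hole"): 0},  # grasp
    {("hand", "peg"): 1, ("peg", "hole"): 0},  # transport (no change)
    {("hand", "peg"): 1, ("peg", "hole"): 1},  # insert
    {("hand", "peg"): 0, ("peg", "hole"): 1},  # release
]
sec = semantic_event_chain(frames)
print(len(sec))  # -> 4 events: approach, grasp, insert, release
```

Because the chain abstracts away metric trajectories, two demonstrations that differ on the sensorimotor level but share the same sequence of relation changes map to the same mid-level representation, which is what makes it a useful bridge to the planning level.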
Original language: English
Journal: IEEE Transactions on Systems, Man, and Cybernetics: Systems
Issue number: 5
Pages (from-to): 670-692
Publication status: Published - 1 May 2018


  • Benchmark testing
  • Context
  • Planning
  • Robot sensing systems
  • Semantics
  • Trajectory
  • Learning by demonstration (LbD)
  • object recognition
  • robotic assembly
  • vision
