Towards Crossmodal Learning for Smooth Multimodal Attention Orientation

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

Abstract

Orienting attention towards another person of interest is a fundamental social behaviour, prevalent in human-human interaction and crucial in human-robot interaction. This orientation behaviour is often governed by the received audio-visual stimuli. We present an adaptive neural circuit for multisensory attention orientation that combines auditory and visual directional cues. The circuit learns to integrate sound direction cues, extracted via a model of the peripheral auditory system of lizards, with visual directional cues obtained via deep-learning-based object detection. We implement the neural circuit on a robot and demonstrate that integrating multisensory information via the circuit generates appropriate motor velocity commands that control the robot's orientation movements. We experimentally validate the adaptive neural circuit with a co-located human target and a loudspeaker emitting a fixed tone.
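The fusion step sketched in the abstract — combining an auditory direction estimate with a visual one and turning the result into a motor velocity command — can be illustrated with a minimal sketch. This is hypothetical illustrative code, not the paper's actual circuit: the function names, fusion weights, and proportional-control gain are assumptions, and the paper's learned adaptive weighting is replaced here by fixed weights.

```python
# Hypothetical sketch of multisensory fusion for attention orientation.
# Azimuths are in radians, relative to the robot's heading (0 = straight ahead).

def fuse_azimuth(audio_az, vision_az, w_audio=0.4, w_vision=0.6):
    """Weighted fusion of auditory and visual direction estimates.

    Falls back to the single available cue when the other is missing
    (e.g. the target is outside the camera's field of view).
    """
    if vision_az is None:
        return audio_az
    if audio_az is None:
        return vision_az
    return w_audio * audio_az + w_vision * vision_az

def turn_velocity(target_az, gain=1.5, v_max=1.0):
    """Proportional angular-velocity command, clipped to the motor limit."""
    return max(-v_max, min(v_max, gain * target_az))

# Example: target slightly to the right by sound, confirmed by vision.
cmd = turn_velocity(fuse_azimuth(0.2, 0.1))
```

In the paper the relative weighting of the two modalities is learned by the adaptive circuit rather than fixed, and the auditory estimate comes from the lizard peripheral-auditory model rather than a generic localiser.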
Original language: English
Title of host publication: Social Robotics: Proceedings of the 10th International Conference, ICSR 2018
Editors: Shuzhi Sam Ge, John-John Cabibihan, Miguel A. Salichs, Elizabeth Broadbent, Hongsheng He, Alan R. Wagner, Álvaro Castro-González
Publisher: Springer
Publication date: 2018
Pages: 318-328
ISBN (Print): 978-3-030-05203-4
ISBN (Electronic): 978-3-030-05204-1
DOIs
Publication status: Published - 2018
Event: 10th International Conference on Social Robotics - Qingdao, China
Duration: 28 Nov 2018 – 30 Nov 2018
http://uconf.org/ICSR2018/index.html

Conference

Conference: 10th International Conference on Social Robotics
Country: China
City: Qingdao
Period: 28/11/2018 – 30/11/2018
Internet address: http://uconf.org/ICSR2018/index.html
Series: Lecture Notes in Computer Science, Volume 11357, ISSN 0302-9743
Series: Lecture Notes in Artificial Intelligence, Volume 11357

Keywords

  • Human robot interaction
  • Neural control
  • Sensor fusion

