Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

Research output: Contribution to conference without publisher/journal › Paper › Research › peer-review

Abstract

Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integration, with neuroplasticity as its underlying mechanism. The Bayesian causal inference framework prescribes a weighted combination of stimulus estimates to achieve optimal crossmodal integration, but assumes knowledge of the underlying stimulus noise distributions. We present a Hebbian-like correlation learning-based model that continuously adapts crossmodal combinations in response to dynamic changes in noisy sensory stimuli but does not require a priori knowledge of sensory noise. The model correlates sensory cues within a single modality as well as across modalities to independently update modality-specific neural weights. This model is instantiated as a neural circuit that continuously learns the best possible weights for a weighted combination of noisy low-level auditory and visual spatial target direction cues. The combined sensory information is directly mapped to wheel velocities that orient a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.
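The abstract describes the mechanism only at a high level. As a minimal sketch of the core computation, the Python below combines noisy auditory and visual direction cues with modality-specific weights adapted online from observed cue statistics, and maps the fused direction to differential wheel velocities. Under forced fusion, the statistically optimal weights are inversely proportional to the cue variances (e.g. w_audio = sigma_v^2 / (sigma_a^2 + sigma_v^2)). This sketch uses a simple online reliability-weighting rule rather than the paper's Hebbian-like correlation learning circuit; all function names, constants, and the running-variance update are assumptions made for illustration.

import numpy as np

# Illustrative constants; none of these values come from the paper.
ALPHA = 0.02      # smoothing factor for the online (unimodal) cue statistics
WHEEL_BASE = 0.1  # wheel separation of the differential-drive robot [m]
K_TURN = 2.0      # gain from fused bearing to turning rate

def update_unimodal_stats(cue, mean, var, alpha=ALPHA):
    # Unimodal statistics: a slow running mean of the cue and its variance
    # around that mean. A cue fluctuating strongly about its own recent
    # history is treated as unreliable (assumes a slowly moving target).
    mean = (1.0 - alpha) * mean + alpha * cue
    var = (1.0 - alpha) * var + alpha * (cue - mean) ** 2
    return mean, max(var, 1e-6)

def fuse(audio, visual, var_a, var_v):
    # Crossmodal combination: weights inversely proportional to the
    # estimated cue variances (reliability weighting).
    w_a = (1.0 / var_a) / (1.0 / var_a + 1.0 / var_v)
    return w_a * audio + (1.0 - w_a) * visual, w_a, 1.0 - w_a

def wheel_velocities(bearing, v_forward=0.2):
    # Map the fused target bearing (rad, robot frame) to left/right wheel speeds.
    omega = K_TURN * bearing
    return v_forward - 0.5 * WHEEL_BASE * omega, v_forward + 0.5 * WHEEL_BASE * omega

# Toy run: stationary target at +0.3 rad; audition noisier than vision.
rng = np.random.default_rng(0)
mean_a = mean_v = 0.0
var_a = var_v = 1.0
for _ in range(1000):
    audio = 0.3 + rng.normal(0.0, 0.25)
    visual = 0.3 + rng.normal(0.0, 0.05)
    mean_a, var_a = update_unimodal_stats(audio, mean_a, var_a)
    mean_v, var_v = update_unimodal_stats(visual, mean_v, var_v)
    fused, w_a, w_v = fuse(audio, visual, var_a, var_v)
    v_left, v_right = wheel_velocities(fused)

print(f"learned weights: audio={w_a:.2f}, visual={w_v:.2f}")

In this toy run the visual cue is less noisy, so its weight grows towards one, mirroring the qualitative behaviour the abstract reports for the learned crossmodal combination.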
Original language: English
Publication date: 2017
Number of pages: 4
Publication status: Published - 2017
Event: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics: Workshop on Computational Models of Crossmodal Learning - Instituto Superior Tecnico, Lisbon, Portugal
Duration: 18 Sep 2017 - 21 Sep 2017
https://www2.informatik.uni-hamburg.de/wtm/WorkshopCrossmodalLearning2017/index.php

Conference

Conference: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics
Location: Instituto Superior Tecnico
Country: Portugal
City: Lisbon
Period: 18/09/2017 - 21/09/2017
Internet address: https://www2.informatik.uni-hamburg.de/wtm/WorkshopCrossmodalLearning2017/index.php

