Concurrent Unimodal Learning Enhances Multisensory Responses of Symmetric Crossmodal Learning in Robotic Audio-Visual Tracking

Research output: Conference abstract (peer-reviewed)


Crossmodal sensory cue integration is a fundamental process in the brain by which stimulus cues from different sensory modalities are combined to form a coherent, unified representation of observed events in the world. Crossmodal integration is a developmental process involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian-like, temporal-correlation-learning-based adaptive neural circuit for crossmodal cue integration that does not require a priori information about the stimuli. The circuit correlates stimulus cues within each modality as well as symmetrically across modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied in a non-holonomic robotic agent that must orient towards a moving audio-visual target. The circuit continuously learns the best possible weights for a weighted combination of auditory and visual spatial target-direction cues; the result is mapped directly to robot wheel velocities to elicit a multisensory orientation response. Trials in simulation demonstrate that concurrent unimodal learning improves both the accuracy and the precision of the multisensory responses produced by symmetric crossmodal learning.
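The mechanism described above (intramodal and crossmodal correlation updating modality-specific weights, a weighted cue combination, and a mapping to wheel velocities) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the learning rate `eta`, the exact Hebbian correlation terms, the normalization, and the unicycle-style wheel mapping are all assumptions.

```python
def hebbian_crossmodal_step(w_a, w_v, cue_a, cue_v, eta=0.1):
    """One moment-by-moment weight update (illustrative sketch).

    Each modality's weight grows with both a crossmodal term
    (co-activity of auditory and visual cues) and a concurrent
    unimodal term (the cue correlated with itself).
    """
    w_a = w_a + eta * (cue_a * cue_v + cue_a * cue_a)  # crossmodal + unimodal
    w_v = w_v + eta * (cue_v * cue_a + cue_v * cue_v)  # crossmodal + unimodal
    s = w_a + w_v  # normalize so the fusion below stays a weighted average
    return w_a / s, w_v / s

def multisensory_response(w_a, w_v, theta_a, theta_v):
    """Weighted combination of auditory and visual directional cues."""
    return w_a * theta_a + w_v * theta_v

def wheel_velocities(theta, v=0.5, k=1.0):
    """Hypothetical differential-drive mapping: steer the non-holonomic
    robot toward the fused target direction theta (left, right)."""
    return v - k * theta, v + k * theta
```

With a stronger auditory cue (`cue_a > cue_v`), the unimodal term lets the auditory weight grow faster than under symmetric crossmodal correlation alone, which is one plausible reading of why concurrent unimodal learning sharpens the fused response.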
Original language: English
Publication date: 2018
Number of pages: 1
Publication status: Published - 2018
Event: 19th Annual International Multisensory Research Forum, The Chestnut Residence and Conference Center, Toronto, Canada
Duration: 14 Jun 2018 – 17 Jun 2018


Conference: 19th Annual International Multisensory Research Forum
Location: The Chestnut Residence and Conference Center

