Crossmodal sensory integration is a fundamental feature of the brain that helps form a coherent, unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli, brought about by rich sensorimotor experiences, drive the development of crossmodal integration, with neuroplasticity as its underlying mechanism. The Bayesian causal inference framework prescribes a weighted combination of stimulus estimates to achieve optimal crossmodal integration, but it assumes knowledge of the underlying stimulus noise distributions. We present a Hebbian-like correlation-learning model that continuously adapts crossmodal combinations in response to dynamic changes in noisy sensory stimuli and requires no a priori knowledge of sensory noise. The model correlates sensory cues both within a single modality and across modalities to independently update modality-specific neural weights. It is instantiated as a neural circuit that continuously learns the best possible weights for a weighted combination of noisy low-level auditory and visual spatial target-direction cues. The combined sensory information is mapped directly to wheel velocities that orient a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the accuracy and the precision of the multisensory orientation response.
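The weighting scheme described above can be sketched in a few lines. The following is a minimal illustration, not the authors' neural circuit: it assumes two noisy direction cues of a moving target, estimates each modality's reliability from running crossmodal correlation statistics (so no a priori noise knowledge is needed), forms the weighted combination, and maps it to wheel velocities of a hypothetical differential-drive robot. All parameter values and gains here are invented for illustration.

```python
import math
import random

random.seed(1)

# Assumed setup: the auditory cue is noisier than the visual cue.
sigma_a, sigma_v = 0.4, 0.1   # cue noise levels (assumed)
alpha = 0.01                  # moving-average rate for running statistics

m_a = m_v = q_a = q_v = q_av = 0.0   # running first and second moments
w_a = w_v = 0.5                      # modality weights

for t in range(20000):
    target = 0.5 * math.sin(0.1 * t)         # moving target direction (rad)
    a = target + random.gauss(0.0, sigma_a)  # auditory direction cue
    v = target + random.gauss(0.0, sigma_v)  # visual direction cue

    # running means, powers, and crossmodal correlation
    m_a += alpha * (a - m_a)
    m_v += alpha * (v - m_v)
    q_a += alpha * (a * a - q_a)
    q_v += alpha * (v * v - q_v)
    q_av += alpha * (a * v - q_av)

    var_a = q_a - m_a * m_a
    var_v = q_v - m_v * m_v
    cov_av = q_av - m_a * m_v

    # correlation-derived reliabilities: the shared crossmodal signal
    # divided by each cue's total power; a noisier cue gets less weight
    if var_a > 1e-9 and var_v > 1e-9 and cov_av > 0.0:
        r_a, r_v = cov_av / var_a, cov_av / var_v
        w_a, w_v = r_a / (r_a + r_v), r_v / (r_a + r_v)

    estimate = w_a * a + w_v * v  # combined direction estimate

    # map the estimate to wheel velocities of a differential-drive robot
    # (hypothetical gains): steer toward the estimated target direction
    v_left = 0.2 - 0.5 * estimate
    v_right = 0.2 + 0.5 * estimate

print(f"learned weights: audio={w_a:.2f}, visual={w_v:.2f}")
```

Because the shared signal is estimated from the crossmodal covariance, the noisier auditory cue ends up with the smaller weight, qualitatively matching the inverse-variance weighting that the Bayesian account prescribes.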
|Number of pages||4|
|Publication status||Published - 2017|
|Event||7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics: Workshop on Computational Models of Crossmodal Learning - Instituto Superior Tecnico, Lisbon, Portugal|
Duration: 18 Sep 2017 → 21 Sep 2017