Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking

Publication: Conference contribution without publisher/journal › Paper › Research › peer review

Abstract

Crossmodal sensory integration is a fundamental feature of the brain that aids in forming a coherent and unified representation of observed events in the world. Spatiotemporally correlated sensory stimuli brought about by rich sensorimotor experiences drive the development of crossmodal integration, with neuroplasticity as its underlying mechanism. The Bayesian causal inference framework prescribes a weighted combination of stimulus estimates to achieve optimal crossmodal integration, but assumes knowledge of the underlying stimulus noise distributions. We present a Hebbian-like correlation-learning-based model that continuously adapts crossmodal combinations in response to dynamic changes in noisy sensory stimuli but does not require a priori knowledge of sensory noise. The model correlates sensory cues within a single modality as well as across modalities to independently update modality-specific neural weights. This model is instantiated as a neural circuit that continuously learns the best possible weights required for a weighted combination of noisy low-level auditory and visual spatial target direction cues. The combined sensory information is directly mapped to wheel velocities that orient a non-holonomic robotic agent towards a moving audio-visual target. Simulation results demonstrate that unimodal learning enhances crossmodal learning and improves both the overall accuracy and precision of the multisensory orientation response.
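The overall structure described in the abstract — a weighted combination of noisy auditory and visual direction cues, modality-specific weights adapted by a correlation rule, and a direct mapping to differential wheel velocities — can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual neural circuit: the update rule shown is only one simple Hebbian-like correlation rule and does not reproduce the paper's reliability weighting, and all names and parameters (`update_weights`, `wheel_velocities`, `ETA`, `BASE_SPEED`, `TURN_GAIN`, the noise levels) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters, not taken from the paper.
ETA = 0.01          # Hebbian-like learning rate
BASE_SPEED = 0.5    # forward wheel-speed component
TURN_GAIN = 1.0     # steering gain

w_audio, w_visual = 0.5, 0.5  # modality-specific weights, kept normalized


def update_weights(cue_a, cue_v, w_a, w_v, eta=ETA):
    """One simple Hebbian-like correlation update: each weight grows with
    the product of its own cue and the combined estimate, then the weights
    are renormalized to sum to one."""
    combined = w_a * cue_a + w_v * cue_v
    w_a += eta * cue_a * combined
    w_v += eta * cue_v * combined
    s = w_a + w_v
    return w_a / s, w_v / s, combined


def wheel_velocities(direction_estimate):
    """Map a combined direction cue (-1..1, left..right) to differential
    wheel speeds of a two-wheeled non-holonomic agent."""
    v_left = BASE_SPEED + TURN_GAIN * direction_estimate
    v_right = BASE_SPEED - TURN_GAIN * direction_estimate
    return v_left, v_right


# Toy loop: target slightly to the right; vision is less noisy than audio.
true_dir = 0.3
for _ in range(200):
    cue_a = true_dir + rng.normal(0, 0.4)   # noisy auditory cue
    cue_v = true_dir + rng.normal(0, 0.1)   # less noisy visual cue
    w_audio, w_visual, est = update_weights(cue_a, cue_v, w_audio, w_visual)

print(wheel_velocities(0.3))
```

The sketch only shows the signal flow (cues → weighted combination → wheel velocities); the paper's model additionally exploits unimodal correlations, which is the mechanism behind the reported improvement in crossmodal learning.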
Original language: English
Publication date: 2017
Number of pages: 4
Status: Published - 2017
Event: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics: Workshop on Computational Models of Crossmodal Learning - Instituto Superior Tecnico, Lisbon, Portugal
Duration: 18 Sep 2017 - 21 Sep 2017
https://www2.informatik.uni-hamburg.de/wtm/WorkshopCrossmodalLearning2017/index.php

Conference

Conference: 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics
Location: Instituto Superior Tecnico
Country: Portugal
City: Lisbon
Period: 18/09/2017 - 21/09/2017

Fingerprint

Robotics
Brain
Wheels
Networks (circuits)

Cite this

Shaikh, D., Bodenhagen, L., & Manoonpong, P. (2017). Unimodal Learning Enhances Crossmodal Learning in Robotic Audio-Visual Tracking. Paper presented at 7th Joint IEEE International Conference on Development and Learning and on Epigenetic Robotics, Lisbon, Portugal.