Concurrent Unimodal Learning Enhances Multisensory Responses of Symmetric Crossmodal Learning in Robotic Audio-Visual Tracking

Publication: Conference contribution without publisher/journal › Conference abstract for conference › Research › peer review

Abstract

Crossmodal sensory cue integration is a fundamental process in the brain by which stimulus cues from different sensory modalities are combined to form a coherent and unified representation of observed events in the world. Crossmodal integration is a developmental process involving learning, with neuroplasticity as its underlying mechanism. We present a Hebbian-like, temporal-correlation-learning-based adaptive neural circuit for crossmodal cue integration that does not require a priori information about the sensory cues. The circuit correlates stimulus cues within each modality as well as symmetrically across modalities to independently update modality-specific neural weights on a moment-by-moment basis, in response to dynamic changes in noisy sensory stimuli. The circuit is embodied in a non-holonomic robotic agent that must orient itself towards a moving audio-visual target. The circuit continuously learns the best possible weights for a weighted combination of the auditory and visual spatial cues to the target's direction. The result is mapped directly to the robot's wheel velocities to elicit a multisensory orientation response. Trials in simulation demonstrate that concurrent unimodal learning improves both the overall accuracy and the precision of the multisensory responses produced by symmetric crossmodal learning alone.
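
The abstract does not include the circuit's equations, but the computation it outlines can be sketched: modality-specific weights driven by a symmetric crossmodal correlation term and a concurrent within-modality (unimodal) correlation term, a weighted fusion of the two directional cues, and a direct mapping of the fused bearing to differential-drive wheel velocities. The Python sketch below is one plausible form under assumptions of my own; the Oja-style decay term, all gains, and all names (update_weights, fuse, wheel_velocities) are hypothetical and not taken from the published work.

import numpy as np

# A minimal, hypothetical sketch (not the published circuit) of the
# computation the abstract describes: weights updated moment by moment
# from a symmetric crossmodal correlation plus a concurrent unimodal
# (within-modality) correlation, weighted fusion of auditory and visual
# bearing cues, and a direct mapping to wheel velocities. All names and
# gains are assumptions made for illustration.

ETA = 0.005        # learning rate (assumed)
K_TURN = 2.0       # steering gain (assumed)
V_BASE = 0.10      # forward speed, m/s (assumed)
WHEEL_BASE = 0.20  # wheel separation, m (assumed)

def update_weights(w_a, w_v, th_a, th_v, th_a_prev, th_v_prev, eta=ETA):
    """Hebbian-like temporal correlation update.

    th_a * th_v is the symmetric crossmodal term (identical for both
    weights); th_a * th_a_prev is the unimodal term, a one-step temporal
    correlation within the auditory channel (likewise for vision). The
    Oja-style decay term -2 * w * th**2 bounds each weight so it settles
    near correlation / signal power, giving noisier cues smaller weights.
    """
    w_a += eta * (th_a * th_v + th_a * th_a_prev - 2.0 * w_a * th_a**2)
    w_v += eta * (th_v * th_a + th_v * th_v_prev - 2.0 * w_v * th_v**2)
    return w_a, w_v

def fuse(w_a, w_v, th_a, th_v):
    """Weighted average of the two directional cues (radians)."""
    s = w_a + w_v
    return (w_a * th_a + w_v * th_v) / s if s > 1e-9 else 0.0

def wheel_velocities(bearing, v=V_BASE, k=K_TURN, wheel_base=WHEEL_BASE):
    """Map the fused bearing error directly to left/right wheel speeds
    of a differential-drive (non-holonomic) robot: turn towards the
    target while moving forward."""
    omega = k * bearing
    return v - omega * wheel_base / 2.0, v + omega * wheel_base / 2.0

# Toy trial: a target whose bearing drifts sinusoidally, observed
# through a noisy auditory cue and a cleaner visual cue.
rng = np.random.default_rng(0)
w_a, w_v = 0.5, 0.5
th_a_prev = th_v_prev = 0.0
for t in range(5000):
    target = 0.4 * np.sin(0.01 * t)           # true bearing, rad
    th_a = target + rng.normal(0.0, 0.20)     # noisier auditory cue
    th_v = target + rng.normal(0.0, 0.05)     # cleaner visual cue
    w_a, w_v = update_weights(w_a, w_v, th_a, th_v, th_a_prev, th_v_prev)
    v_left, v_right = wheel_velocities(fuse(w_a, w_v, th_a, th_v))
    th_a_prev, th_v_prev = th_a, th_v

print(f"learned weights: audio={w_a:.3f}, visual={w_v:.3f}")

Under these assumptions the noisier auditory cue settles to a smaller weight than the cleaner visual cue, which is the qualitative reliability-weighting behaviour that crossmodal cue integration is meant to produce; the published circuit may differ in detail.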
Original language: English
Publication date: 2018
Number of pages: 1
Status: Published - 2018
Event: 19th Annual International Multisensory Research Forum - The Chestnut Residence and Conference Center, Toronto, Canada
Duration: 14 Jun 2018 - 17 Jun 2018
http://imrf.info

Conference

Conference: 19th Annual International Multisensory Research Forum
Location: The Chestnut Residence and Conference Center
Country: Canada
City: Toronto
Period: 14/06/2018 - 17/06/2018
Internet address: http://imrf.info

Fingerprint

Robotics
Networks (circuits)
Brain
Wheels
Robots

Cite this

Shaikh, D. (2018). Concurrent Unimodal Learning Enhances Multisensory Responses of Symmetric Crossmodal Learning in Robotic Audio-Visual Tracking. 30-31. Abstract from the 19th Annual International Multisensory Research Forum, Toronto, Canada.