A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

Dennis Goldschmidt*, Poramate Manoonpong, Sakyasingha Dasgupta

*Corresponding author for this work

Publication: Contribution to journal › Journal article › Research › peer review


Abstract

Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. In social insects in particular, such as ants and bees, these navigational capabilities are guided by orientation-directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector, which guides them back home on a straight path. They further acquire and retrieve path integration-based vector memories that are anchored globally to the nest or to visual landmarks. Although existing computational models have reproduced similar behaviors, a neurocomputational model of vector navigation that includes the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control scheme enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed from the combination of vector memories and random exploration. In simulation, we show that these neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent, linking behavioral observations to their possible underlying neural substrates.
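To make the architecture described in the abstract concrete, the following is a minimal, hypothetical Python sketch (not the authors' published implementation) of two of the core mechanisms: path integration as the accumulation of cosine-weighted displacement in a circular array of heading-tuned units, and a reward-modulated update that associates the current path-integration state with a vector memory. The array size, learning rate, decoding scheme, and the random outbound walk are illustrative assumptions.

import numpy as np

# Circular array of heading-tuned units (array size N is an assumption).
N = 18
pref = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred directions

def path_integrate(pi_state, heading, speed, dt=1.0):
    # Accumulate compass (heading) and odometric (speed) cues into the array.
    return pi_state + speed * dt * np.cos(heading - pref)

def decode_vector(pi_state):
    # Population-vector decoding; the home vector points opposite to this.
    x = np.dot(pi_state, np.cos(pref))
    y = np.dot(pi_state, np.sin(pref))
    return np.hypot(x, y), np.arctan2(y, x)  # (magnitude, direction)

def learn_vector_memory(memory, pi_state, reward, lr=0.1):
    # Reward-modulated association of the current PI state with a goal memory.
    return memory + lr * reward * (pi_state - memory)

# Toy usage: random outbound walk, then store the food location on reward.
rng = np.random.default_rng(0)
pi_state, memory = np.zeros(N), np.zeros(N)
for _ in range(200):
    heading = rng.uniform(0.0, 2.0 * np.pi)
    pi_state = path_integrate(pi_state, heading, speed=1.0)
memory = learn_vector_memory(memory, pi_state, reward=1.0)  # food found
magnitude, direction = decode_vector(memory)
print(f"stored goal vector: magnitude {magnitude:.1f}, direction {np.degrees(direction):.1f} deg")

In such a sketch, steering toward a stored goal would amount to comparing the decoded direction of the vector memory with that of the current path-integration state and blending the result with a random-exploration term, in line with the action-selection stage described in the abstract.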

Original language: English
Article number: 20
Journal: Frontiers in Neurorobotics
Volume: 11
Number of pages: 17
ISSN: 1662-5218
DOI: 10.3389/fnbot.2017.00020
Status: Published - 2017

Fingerprint

Navigation
Data storage equipment
Brain
Substrates

Cite this

@article{2085d954b94f4e44a6cb278ea349962d,
title = "A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents",
keywords = "Artificial intelligence, Insect navigation, Neural networks, Path integration, Reward-based learning",
author = "Dennis Goldschmidt and Poramate Manoonpong and Sakyasingha Dasgupta",
year = "2017",
doi = "10.3389/fnbot.2017.00020",
language = "English",
volume = "11",
journal = "Frontiers in Neurorobotics",
issn = "1662-5218",
publisher = "Frontiers Research Foundation",

}

A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents. / Goldschmidt, Dennis; Manoonpong, Poramate; Dasgupta, Sakyasingha.

In: Frontiers in Neurorobotics, Vol. 11, 20, 2017.


TY - JOUR

T1 - A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

AU - Goldschmidt, Dennis

AU - Manoonpong, Poramate

AU - Dasgupta, Sakyasingha

PY - 2017

Y1 - 2017

KW - Artificial intelligence

KW - Insect navigation

KW - Neural networks

KW - Path integration

KW - Reward-based learning

U2 - 10.3389/fnbot.2017.00020

DO - 10.3389/fnbot.2017.00020

M3 - Journal article

C2 - 28446872

AN - SCOPUS:85018459730

VL - 11

JO - Frontiers in Neurorobotics

JF - Frontiers in Neurorobotics

SN - 1662-5218

M1 - 20

ER -