Experimental evaluation of a method for simulation based learning for a multi-agent system acting in a physical environment

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review

13 Downloads (Pure)

Abstract

A method for simulation-based reinforcement learning (RL) for a multi-agent system acting in a physical environment is introduced, based on Multi-Agent Actor-Critic (MAAC) reinforcement learning. In the proposed method, avatar agents learn in a simulated model of the physical environment, and the learned experience is then used by the agents in the actual physical environment. The proposed concept is verified using a laboratory benchmark setup in which multiple agents, acting within the same environment, are required to coordinate their movement actions to prevent collisions. Three state-of-the-art algorithms for multi-agent reinforcement learning (MARL) are evaluated with respect to their applicability to a predefined benchmark scenario. Based on simulations, it is shown that the MAAC method is the most suitable for implementation, as it provides effective distributed learning and fits well with the concept of learning in simulated environments. Our experimental results, which compare learning and task execution in a simulated environment with task execution in a physical environment, demonstrate the feasibility of the proposed concept.
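For illustration, the following is a minimal sketch (in PyTorch, not the authors' implementation) of the concept summarized above: decentralized actor policies are trained against a centralized critic in a simulated stand-in for the physical environment, and the learned policies are then handed to the agents operating in the physical setup. The toy environment, reward, network sizes, file names and hyperparameters are assumptions made purely for this sketch.

# Minimal sketch of "learn in simulation, deploy to the physical agents" with a
# MADDPG/MAAC-style centralized critic and decentralized actors. All details
# below (environment, reward, dimensions, hyperparameters) are illustrative.
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 4, 2   # assumed toy dimensions

class Actor(nn.Module):
    """Decentralized policy: maps an agent's own observation to its action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Tanh())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observation-action of all agents."""
    def __init__(self):
        super().__init__()
        in_dim = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

def simulated_step(obs, act):
    """Stand-in for the simulated model of the physical environment:
    agents move in 2D and a shared reward penalizes pairwise proximity."""
    pos = obs[:, :2] + 0.1 * act
    dist = torch.cdist(pos, pos) + torch.eye(N_AGENTS)   # mask self-distances
    reward = dist.min().item() - 0.5                      # crude collision-avoidance signal
    next_obs = torch.cat([pos, act], dim=-1)
    return next_obs, reward

actors = [Actor() for _ in range(N_AGENTS)]
critic = CentralCritic()
actor_opt = torch.optim.Adam([p for a in actors for p in a.parameters()], lr=1e-3)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

# Phase 1: avatar agents learn in the simulated environment.
obs = torch.randn(N_AGENTS, OBS_DIM)
for _ in range(200):
    with torch.no_grad():
        act = torch.stack([actors[i](obs[i]) for i in range(N_AGENTS)])
        next_obs, reward = simulated_step(obs, act)

    # Critic update: regress the centralized value towards the one-step reward
    # (a full MAAC/MADDPG setup would add replay buffers, bootstrapped targets
    # and target networks).
    value = critic(obs.flatten(), act.flatten())
    critic_loss = ((value - reward) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor update: each decentralized actor ascends the centralized critic.
    new_act = torch.stack([actors[i](obs[i]) for i in range(N_AGENTS)])
    actor_loss = -critic(obs.flatten(), new_act.flatten()).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    obs = next_obs

# Phase 2: transfer the learned experience; each physical agent loads its actor.
for i, actor in enumerate(actors):
    torch.save(actor.state_dict(), f"agent_{i}_policy.pt")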

Original language: English
Title of host publication: Proceedings of the 11th International Conference on Agents and Artificial Intelligence
Editors: Ana Rocha, Luc Steels, Jaap van den Herik
Volume: 1: ICAART
Publisher: SCITEPRESS Digital Library
Publication date: 2019
Pages: 103-109
ISBN (Electronic): 9789897583506
DOIs: https://doi.org/10.5220/0007250301030109
Publication status: Published - 2019
Event: 11th International Conference on Agents and Artificial Intelligence, ICAART 2019 - Prague, Czech Republic
Duration: 19 Feb 2019 - 21 Feb 2019

Conference

Conference: 11th International Conference on Agents and Artificial Intelligence, ICAART 2019
Country: Czech Republic
City: Prague
Period: 19/02/2019 - 21/02/2019
Sponsor: Institute for Systems and Technologies of Information, Control and Communication (INSTICC)

Fingerprint

  • Reinforcement learning
  • Multi-agent systems

Keywords

  • Cooperative Multi-Agent Systems
  • Cooperative Navigation
  • Multi-Agent Actor-Critic
  • Multi-Agent Reinforcement Learning
  • Simulation Based Learning

Cite this

Qian, K., Brehm, R. W., & Duggen, L. (2019). Experimental evaluation of a method for simulation based learning for a multi-agent system acting in a physical environment. In A. Rocha, L. Steels, & J. van den Herik (Eds.), Proceedings of the 11th International Conference on Agents and Artificial Intelligence (Vol. 1: ICAART, pp. 103-109). SCITEPRESS Digital Library. https://doi.org/10.5220/0007250301030109
