A HIERARCHICAL REINFORCEMENT LEARNING-BASED APPROACH TO MULTI-ROBOT COOPERATION FOR TARGET SEARCHING IN UNKNOWN ENVIRONMENTS

Yifan Cai, Simon X. Yang, and Xin Xu

Keywords

Hierarchical reinforcement learning, multi-robot cooperation, completely unknown environment, target searching

Abstract

Effective cooperation among multiple robots in unknown environments is essential in many robotic applications, such as environment exploration and target searching. In this paper, a hierarchical reinforcement learning approach that combines MAXQ with Options, together with a multi-agent cooperation strategy, is proposed for the real-time cooperation of multiple robots in completely unknown environments. Unlike other algorithms that need an explicit environment model or parameters selected by trial and error, the proposed cooperation method obtains all the required parameters automatically through learning. By integrating segmental options with the MAXQ algorithm, a multi-agent cooperation strategy is designed. When given new tasks, this cooperation method controls the multi-robot system to complete them effectively. The simulation results demonstrate that the proposed scheme is able to effectively and efficiently guide a team of robots to cooperatively accomplish target-searching tasks in completely unknown environments.
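To make the "options" ingredient of the abstract concrete, the sketch below illustrates the standard options framework (initiation set, intra-option policy, termination condition) with a root-level learner that chooses among temporally extended options. This is a generic, minimal illustration, not the paper's MAXQ/Option algorithm: the corridor environment, the two hand-coded options, and the simple value update over options are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple
import random

random.seed(0)

State = Tuple[int, int]  # hypothetical robot pose (x, y) in a 1-D corridor

@dataclass
class Option:
    """A temporally extended action: where it may start, how it acts,
    and when it terminates (the standard options framework)."""
    name: str
    can_start: Callable[[State], bool]
    policy: Callable[[State], State]       # one primitive step toward a subgoal
    terminates: Callable[[State], bool]

# Two hand-coded options for a toy search task: the target sits at x = 9.
def move_right(s: State) -> State: return (min(s[0] + 1, 9), s[1])
def move_left(s: State) -> State:  return (max(s[0] - 1, 0), s[1])

explore_right = Option("explore_right", lambda s: True, move_right,
                       lambda s: s[0] == 9)
explore_left = Option("explore_left", lambda s: True, move_left,
                      lambda s: s[0] == 0)
options = {"explore_right": explore_right, "explore_left": explore_left}

def run_option(s: State, opt: Option, max_steps: int = 50) -> Tuple[State, int]:
    """Execute an option's internal policy until its termination condition fires."""
    steps = 0
    while not opt.terminates(s) and steps < max_steps:
        s = opt.policy(s)
        steps += 1
    return s, steps

# Root-level learner: epsilon-greedy value estimates over whole options,
# so credit is assigned per option rather than per primitive action.
Q: Dict[str, float] = {name: 0.0 for name in options}
alpha, epsilon = 0.5, 0.2
for episode in range(30):
    s = (0, 0)
    name = (random.choice(list(Q)) if random.random() < epsilon
            else max(Q, key=Q.get))
    s2, _ = run_option(s, options[name])
    reward = 10.0 if s2[0] == 9 else -1.0  # target found at x = 9
    Q[name] += alpha * (reward - Q[name])

print(max(Q, key=Q.get))
```

After a few episodes the learner's value estimates favor the option whose termination state contains the target, showing how decisions at the root level operate over subtasks rather than primitive motions; the paper's contribution is to learn such a hierarchy automatically rather than hand-code it as done here.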
