M-TARE: Multi-robot Exploration Planner

(preliminary release)

M-TARE is a multi-robot extension of the TARE planner. Under the same hierarchical framework as TARE, M-TARE uses the coarse global-level map to coordinate multiple robots for computational, bandwidth, and exploration efficiency. In particular, robots running the M-TARE planner share a coarsely discretized grid map with each other to keep track of explored and unexplored areas. To handle communication constraints, a robot employs a "pursuit" strategy: when a potential information exchange can speed up the overall exploration, the robot approaches another robot along a route that maximizes the probability of meeting it. The figures below illustrate the M-TARE framework and the "pursuit" strategy. Here is the M-TARE Planner repository.

Illustration of the M-TARE framework. The coarse global map (solid squares), representing unexplored regions, is allocated among robots by solving a Vehicle Routing Problem (VRP).

Illustration of the "pursuit" strategy that a robot can employ to approach another robot for communication. During the pursuit, the robot visits the other robot's global subspaces in an order that maximizes the probability of meeting that robot.

An early version of the M-TARE planner was used by the CMU-OSU Team in the DARPA Subterranean Challenge.

Final competition results from the DARPA Subterranean Challenge in Louisville Mega Cavern, KY. The three robots achieved the most complete exploration (26 out of 28 sectors) among all teams, winning the "Most Sectors Explored Award".

Quick Start

Install Dependencies

M-TARE can be quickly set up and tested using Docker. Make sure Docker, Docker Compose, the NVIDIA Container Toolkit, and the NVIDIA GPU driver (the last two only for computers with NVIDIA GPUs) are installed on your computer. Installation instructions can be found here and here; brief instructions are also provided later on this page. To verify the installation, use the following commands.

docker -v 

>>>>> Docker version xx.x.x, build xxxxxxx


docker compose version

>>>>> Docker Compose version vx.x.x

For computers with NVIDIA GPUs:

docker run --gpus all --rm nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi

>>>>> Sat Dec 16 17:27:17 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.125.06   Driver Version: 525.125.06   CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  Off  | 00000000:01:00.0  On |                  N/A |
| 24%   50C    P0    40W / 200W |    918MiB /  8192MiB |      3%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+

Install additional packages.

sudo apt install tmux tmuxp net-tools

Get the Docker Image and Launch Scripts

Pull the docker image.

docker pull caochao/mtare-open-source:latest

Clone the launch scripts.

git clone https://github.com/caochao39/mtare_docker.git

We recommend using our custom tmux configuration. Copy the file to the home folder.

cd mtare_docker

cp .tmux.conf ~/

Set up the Network

To run multiple simulated robots on a single computer

To run multiple simulated robots across different computers

When connecting computers to a local network via a switch or a direct cable, we recommend assigning a static IP to each computer manually. In cases where a router is part of the local network setup, you can use either static IP or automatic IP addressing (DHCP) based on the router's settings.
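For example, a temporary static IP can be assigned from the command line (the interface name eno1 and the 192.168.1.0/24 subnet are assumptions; adjust them to your setup, pick a unique host address per computer, and note that addresses added this way do not persist across reboots):

```shell
# Assign a temporary static IP to the Ethernet interface
# (hypothetical interface name and subnet).
sudo ip addr add 192.168.1.10/24 dev eno1
sudo ip link set eno1 up

# Verify the address and test connectivity to another computer on the subnet.
ip addr show eno1
ping -c 3 192.168.1.11
```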

Launch Robots

In a terminal, use the following command to allow Docker to connect to the X server to show the RVIZ GUI (this can be undone with xhost - afterward).

xhost +

Now you can run a script to launch a robot. Here we assume the Ethernet interface is named eno1; change the interface name to match your computer.

./run_mtare.sh tunnel 30 2 0 eno1

The requirements for the arguments are given below.

Usage: run_mtare.sh <environment> <comms_range> <robot_num> <robot_id> <network_interface>

  - <environment>: The environment to explore; one of: tunnel, garage, campus, indoor, forest

  - <comms_range>: Communication range in meters; two robots farther apart than this range cannot communicate with each other

  - <robot_num>: Total number of robots

  - <robot_id>: Robot id, ranging from 0 to robot_num - 1

  - <network_interface>: Name of the network interface to use, e.g., eth0
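As a quick sanity check, the argument constraints above can be expressed as a small shell sketch (these helper functions are hypothetical, not part of run_mtare.sh):

```shell
# Hypothetical helpers mirroring the argument constraints of run_mtare.sh.
valid_env() {
  # The environment must be one of the five supported worlds.
  case "$1" in
    tunnel|garage|campus|indoor|forest) return 0 ;;
    *) return 1 ;;
  esac
}

valid_robot_id() {
  # Usage: valid_robot_id <robot_num> <robot_id>
  # The robot id must lie in 0 .. robot_num - 1.
  [ "$2" -ge 0 ] && [ "$2" -lt "$1" ]
}
```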

In another terminal (this can be on a different computer that went through the same setup steps above):

./run_mtare.sh tunnel 30 2 1 eno1

To launch more than 2 robots, simply run ./run_mtare.sh in additional terminals. Make sure to supply the same robot_num argument to all launches and give each robot a unique robot_id ranging from 0 to robot_num - 1. For example, to launch 5 robots:

In terminal 1:

./run_mtare.sh tunnel 30 5 0 eno1

In terminal 2:

./run_mtare.sh tunnel 30 5 1 eno1

In terminal 3:

./run_mtare.sh tunnel 30 5 2 eno1

In terminal 4:

./run_mtare.sh tunnel 30 5 3 eno1

In terminal 5:

./run_mtare.sh tunnel 30 5 4 eno1
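Instead of typing each command by hand, the per-robot launch commands can be generated in a loop; a minimal sketch (tunnel environment, 30 m comms range, and interface eno1 assumed) that prints each command rather than running it:

```shell
# Build the launch command for every robot id; print them here, or hand
# each one to its own terminal/tmux session to actually start the robots.
ROBOT_NUM=5
CMDS=()
for ROBOT_ID in $(seq 0 $((ROBOT_NUM - 1))); do
  CMDS+=("./run_mtare.sh tunnel 30 ${ROBOT_NUM} ${ROBOT_ID} eno1")
done
printf '%s\n' "${CMDS[@]}"
```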

Note: To verify that the robots can communicate with each other, start the robots at the same time and observe that the circles indicating the communication range turn green.

Stop Robots

In a separate terminal, use the script mtare_docker/stop.sh to stop the robots on a computer. This script kills the relevant tmux sessions and stops and removes the Docker containers.

Visualization

If the launch is successful, RVIZ will pop up to show the exploration process (one RVIZ window per robot). The visual elements in RVIZ are configured similarly to the single-robot exploration. In addition, the following elements are specific to the multi-robot case:

Communication Status

The figures below show the three communication statuses between two robots:

Out-of-comms (red circle)

Pursuit (yellow circle)

In-comms (green circle)

Docker Installation

1) For computers without an NVIDIA GPU

Install Docker and grant user permission.

curl https://get.docker.com | sh && sudo systemctl --now enable docker

sudo usermod -aG docker ${USER}

Make sure to restart the computer, then install additional packages.

sudo apt update && sudo apt install mesa-utils libgl1-mesa-glx libgl1-mesa-dri

2) For computers with NVIDIA GPUs

Install Docker and grant user permission.

curl https://get.docker.com | sh && sudo systemctl --now enable docker

sudo usermod -aG docker ${USER}

Make sure to restart the computer, then install the NVIDIA Container Toolkit (the NVIDIA GPU driver should already be installed).

curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor \

  -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \

  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list \

  | sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' \

  | sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list

sudo apt update && sudo apt install nvidia-container-toolkit

Configure Docker runtime and restart Docker daemon.

sudo nvidia-ctk runtime configure --runtime=docker

sudo systemctl restart docker

Note: If your computer does not have an NVIDIA GPU but has NVIDIA GPU drivers installed, please uninstall them.

Simulation Results

We investigated how exploration time efficiency improves as more robots are deployed in the environment. In addition, we adapted the M-TARE framework to implement different coordination strategies for exploration under communication constraints. We investigated the four communication strategies described in the following.

All four communication strategies were realized under the M-TARE framework. We conducted experiments to evaluate the four strategies in the five simulation environments, shown in the figures below. The number of robots deployed for exploration ranged from 2 to 20 at a step size of 2. We ran 10 trials for each combination of environment, number of robots, and communication strategy, resulting in 2000 trials in total. The distance range for communication was set to 30 meters to resemble a realistic wireless connection range.
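The total trial count follows directly from the experiment grid (5 environments, 10 robot counts from 2 to 20 in steps of 2, 4 strategies, 10 trials per configuration):

```shell
# 5 environments x 10 robot counts x 4 strategies x 10 trials = 2000
ENVS=5
ROBOT_COUNTS=$(seq 2 2 20 | wc -l)   # 2, 4, ..., 20 -> 10 values
STRATEGIES=4
TRIALS_PER_CONFIG=10
TOTAL=$((ENVS * ROBOT_COUNTS * STRATEGIES * TRIALS_PER_CONFIG))
echo "${TOTAL}"
```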

Campus

Indoor

Garage

Tunnel

Forest

Furthermore, we investigated the influence of communication range on exploration efficiency, as shown in the figures below. We conducted experiments in the tunnel environment with communication ranges from 10 to 300 meters while keeping the number of robots constant at 5, 10, 15, and 20, respectively. Each configuration was run for 10 trials, resulting in more than 1400 trials in total.

5 robots

10 robots

15 robots

20 robots

Network Setup for Hardware Deployment

The procedure for deploying M-TARE on physical robots is highly hardware-dependent, so here we only give high-level guidance on the wireless network setup. Typically, a wireless ad hoc network is used for inter-robot communication, where each robot is equipped with a radio node that connects to the network. Compared to a traditional centralized network, a wireless ad hoc network does not require a central router or wireless access point, which allows robots to establish a direct point-to-point connection when they are within wireless communication range.

In most instances, a computer running M-TARE connects to a radio node via an Ethernet cable, and the radio node then establishes a wireless connection to the ad hoc network, i.e., to the other robots. To the computer, this setup appears as a direct Ethernet cable connection to another computer, so the network setup steps above apply.

More Information

The Docker image used in the quick start section contains code from the following four packages:

To modify the code for custom purposes, users need to rebuild the code within a Docker container. Here are instructions to build ros1_bridge from source with custom messages. To update the Docker image with modified code, users can commit the changes to a new Docker image, tag the image, and push it to Docker Hub for future use (detailed instructions). We recommend starting from our Docker image, which keeps the functional configurations, rather than creating one from scratch.
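The commit-tag-push workflow might look like the following sketch (the container and image names are placeholders; substitute your own container name and Docker Hub account):

```shell
# Snapshot a modified container into a new image (names are hypothetical).
docker commit my_mtare_container my_dockerhub_user/mtare-custom:dev

# Optionally retag, then push to Docker Hub for reuse on other machines.
docker tag my_dockerhub_user/mtare-custom:dev my_dockerhub_user/mtare-custom:latest
docker push my_dockerhub_user/mtare-custom:latest
```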

People

Chao Cao
CMU Robotics Institute

Zhongqiang Ren
Shanghai Jiao Tong University

Henrik Christensen
UC San Diego

Stephen Smith
CMU Robotics Institute

Howie Choset
CMU Robotics Institute

Ji Zhang
CMU NREC & Robotics Institute

References

C. Cao, Z. Ren, H. Choset, and J. Zhang. Efficient, Heterogeneous Multi-robot Exploration with Maximum-likelihood Pursuit Strategy. Submitted.

C. Cao, H. Zhu, Z. Ren, H. Choset, and J. Zhang. Representation Granularity Enables Time-Efficient Autonomous Exploration in Large, Complex Worlds. Science Robotics. vol. 8, no. 80, 2023. [PDF] [Summary Video]

Credits

The Google OR-Tools library is from its open-source release.

ros1_bridge is used to convert messages between ROS1 and ROS2 for inter-robot communication.

Notes

TARE is named after the efforts to develop Technologies for Autonomous Robot Exploration.