Autonomous Exploration Development Environment

The environment is meant to facilitate system development and robot deployment for ground-based autonomous navigation and exploration. It contains a variety of simulation environments, autonomous navigation modules such as collision avoidance, terrain traversability analysis, and waypoint following, and a set of visualization tools. Users can develop autonomous navigation systems in simulation and later port those systems onto real robots for deployment. Here is our open-source repository.

Quick Start

The repository has been tested in Ubuntu 18.04 with ROS Melodic and Ubuntu 20.04 with ROS Noetic. If using Ubuntu 22.04 with ROS2 Humble or Ubuntu 24.04 with ROS2 Jazzy, please refer to our instructions to set up the system with ROS2. Install dependencies with the command lines below.

sudo apt update
sudo apt install libusb-dev

Clone the open-source repository.

git clone https://github.com/HongbiaoZ/autonomous_exploration_development_environment.git

In a terminal, go to the folder and check out the branch that matches the computer setup. Replace 'distribution' with 'melodic' or 'noetic'. Then, compile.

cd autonomous_exploration_development_environment
git checkout distribution
catkin_make

Run a script to download the simulation environments (~500MB). This may take several minutes. If the script does not start the download, users can download the simulation environments manually and unzip the files to 'src/vehicle_simulator/mesh'.

./src/vehicle_simulator/mesh/download_environments.sh

Source the ROS workspace and launch the system.

source devel/setup.sh
roslaunch vehicle_simulator system_garage.launch

Now, users can send a waypoint by clicking the 'Waypoint' button in RVIZ and then clicking a point to set the waypoint. The vehicle will navigate to the waypoint avoiding obstacles along the way. Note that the waypoint should be reachable and in the vicinity of the vehicle.

Alternatively, users can run a ROS node to send a series of waypoints. In another terminal, go to the folder, source the ROS workspace, and run the ROS node with the command line below. The ROS node sends the navigation boundary and speed as well. The vehicle will navigate inside the boundary while following the waypoints.

roslaunch waypoint_example waypoint_example_garage.launch

Detailed Instructions

Simulation Environments

The repository contains a set of simulation environments of different types and scales. To launch the system with a particular environment, use the command line below, replacing 'environment' with one of the environment names: 'campus', 'indoor', 'garage', 'tunnel', or 'forest'. Users can then use the 'Waypoint' button in RVIZ to navigate the vehicle. To view the vehicle in the environment in the Gazebo GUI, set 'gazebo_gui = true' in the launch file, located in 'src/vehicle_simulator/launch'.

roslaunch vehicle_simulator system_environment.launch

The simulation environments are kept in 'src/vehicle_simulator/mesh'. Each environment folder contains a 'preview' folder, where 'overview.png' gives a quick overview of the environment and 'pointcloud.ply' is a point cloud of the overall map. The point cloud can be viewed in 3D processing software, e.g. CloudCompare or MeshLab. Autonomous navigation systems requiring a prior map of the environment can also utilize the point cloud.
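
For example, below is a minimal Python sketch of loading an environment's overall point cloud as a prior map, assuming the Open3D library is installed. The garage path follows the folder layout described above.

# Minimal sketch: load an environment's 'pointcloud.ply' as a prior map.
# Assumes the Open3D library is installed; the path follows the folder layout
# described above.
import open3d as o3d

pcd = o3d.io.read_point_cloud('src/vehicle_simulator/mesh/garage/preview/pointcloud.ply')
print(pcd)  # reports the number of points loaded
downsampled = pcd.voxel_down_sample(voxel_size=0.2)  # coarser map for planning
o3d.visualization.draw_geometries([downsampled])  # quick visual check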

Campus (340m x 340m): A large-scale environment modeled on part of the Carnegie Mellon University campus, containing undulating terrain and a convoluted layout.

Indoor Corridors (130m x 100m): Consists of long and narrow corridors connected by lobby areas. Obstacles such as tables and columns are present. A guard rail adds further difficulty to autonomous exploration.

Multi-story Garage (140m x 130m, 5 floors): An environment with multiple floors and sloped terrain for testing autonomous navigation in a 3D setting.

Tunnel Network (330m x 250m): A large-scale environment containing tunnels that form a network. This environment is provided by Tung Dang at the University of Nevada, Reno.

Forest (150m x 150m): Contains mostly trees and a couple of houses in a cluttered setting.

Autonomous Navigation Modules

Sending waypoints, navigation boundary, and speed: Upon receiving waypoint, navigation boundary, and speed messages, the system navigates the vehicle inside the navigation boundary toward the waypoint. Sending the navigation boundary and speed is optional. The default speed in the system is set to 2 m/s. Users can take the code in the 'waypoint_example' package as an example of sending these messages.
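
For reference, here is a minimal Python (rospy) sketch of sending a waypoint and a speed. The '/way_point' topic name is an assumption for illustration ('/speed' is mentioned later in this document); please check the 'waypoint_example' package for the exact topics and message types.

# Minimal sketch of sending a waypoint and a speed with rospy.
# The '/way_point' topic name is an assumption; check the 'waypoint_example'
# package for the exact topics and message types used by the system.
import rospy
from geometry_msgs.msg import PointStamped
from std_msgs.msg import Float32

rospy.init_node('waypoint_sender_example')
waypoint_pub = rospy.Publisher('/way_point', PointStamped, queue_size=5)  # assumed topic
speed_pub = rospy.Publisher('/speed', Float32, queue_size=5)

rospy.sleep(1.0)  # give the publishers time to connect

waypoint = PointStamped()
waypoint.header.frame_id = 'map'
waypoint.header.stamp = rospy.Time.now()
waypoint.point.x = 5.0  # a reachable point in the vicinity of the vehicle
waypoint.point.y = 0.0
waypoint.point.z = 0.0
waypoint_pub.publish(waypoint)

speed_pub.publish(Float32(data=2.0))  # m/s, matching the system default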

Collision avoidance: Collision avoidance is handled by the 'local_planner' package, which computes collision-free paths to guide the vehicle through the environment. Motion primitives are pre-generated and loaded into the system on startup. As the vehicle navigates, the system determines in real time which motion primitives are occluded by obstacles; those primitives are eliminated and collision-free paths are selected from the rest. In the image below, the coordinate frame indicates the vehicle and the yellow dots are collision-free paths. In an autonomous navigation system, the collision avoidance module should be guided by a high-level planning module, e.g. a route planner that sends waypoints in the vicinity of the vehicle along the route. The collision avoidance module uses terrain maps from the 'terrain_analysis' package to determine terrain traversability (see below). A simplified sketch of the primitive-elimination idea follows the image below.

Yellow: Collision-free Paths
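
For intuition only, the Python sketch below illustrates the primitive-elimination idea in a simplified form. It is not the actual 'local_planner' implementation, which relies on pre-generated primitives and precomputed correspondences for real-time performance.

# Conceptual sketch of motion-primitive elimination (not the actual
# 'local_planner' implementation).
import math

def collision_free_primitives(primitives, obstacle_points, clearance=0.5):
    # primitives: list of paths, each a list of (x, y) points in the vehicle frame
    # obstacle_points: list of (x, y) obstacle points in the vehicle frame
    free = []
    for path in primitives:
        blocked = any(
            math.hypot(px - ox, py - oy) < clearance
            for (px, py) in path
            for (ox, oy) in obstacle_points
        )
        if not blocked:
            free.append(path)
    return free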

Terrain traversability analysis: The 'terrain_analysis' package analyzes the local smoothness of the terrain and associates a cost with each point on the terrain map. The system publishes terrain map messages, where each message contains a set of 'pcl::PointXYZI' typed points: the x, y, and z fields of a point hold the coordinates and the intensity field stores the cost. The terrain map covers a 10m x 10m area with the vehicle at the center. Further, the 'terrain_analysis_ext' package extends the terrain map to a 40m x 40m area. The extended terrain map keeps lidar points over a sliding window of 10 seconds with a non-decay region within 4m of the vehicle. In an autonomous navigation system, the terrain map is used by the collision avoidance module (see above) and the extended terrain map is meant to be used by a high-level planning module. To view the terrain map or the extended terrain map in RVIZ, click 'Panels->Displays' and check 'terrainMap' or 'terrainMapExt'. Green points are traversable and red points are non-traversable. A minimal sketch of reading the terrain cost programmatically follows the images below.

Terrain Map (10m x 10m), Green: Traversable, Red: Non-traversable

Extended Terrain Map (40m x 40m), Green: Traversable, Red: Non-traversable
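
As a quick way to inspect the terrain cost programmatically, the Python sketch below subscribes to a terrain map topic and reads the per-point cost from the intensity field. The '/terrain_map' topic name and the cost threshold are assumptions for illustration; check the 'terrain_analysis' package for the exact topic and suitable values.

# Minimal sketch: read per-point terrain cost from the intensity field.
# The '/terrain_map' topic name and the 0.1 cost threshold are assumptions;
# check the 'terrain_analysis' package for the exact topic and suitable values.
import rospy
from sensor_msgs.msg import PointCloud2
from sensor_msgs import point_cloud2

def terrain_callback(msg):
    total = 0
    high_cost = 0
    for x, y, z, intensity in point_cloud2.read_points(
            msg, field_names=('x', 'y', 'z', 'intensity'), skip_nans=True):
        total += 1
        if intensity > 0.1:  # assumed threshold separating traversable/non-traversable
            high_cost += 1
    rospy.loginfo('terrain points: %d, high-cost points: %d', total, high_cost)

rospy.init_node('terrain_map_inspector')
rospy.Subscriber('/terrain_map', PointCloud2, terrain_callback)  # assumed topic
rospy.spin()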

Waypoint following: Upon receiving a waypoint, the system guides the vehicle to the waypoint. For information on tuning the path following control, i.e. speed, yaw rate, acceleration, look-ahead distance, and gains, and on changing the vehicle size, please refer to the Ground-based Autonomy Base Repository.

Visualization Tools

The repository includes a set of visualization tools for users to inspect the performance of the autonomous exploration. The overall map of the environment, explored areas, and vehicle trajectory can be viewed in RVIZ by clicking 'Panels->Displays' and checking 'overallMap', 'exploredAreas', and 'trajectory'. Further, the system plots three metrics in real time to visualize explored volume, traveled distance, and algorithm runtime, respectively. To plot the runtime, users need to send the numbers as messages on the ROS topic below.
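
Below is a minimal Python sketch of reporting the algorithm runtime. The '/runtime' topic name and the Float32 message type are assumptions for illustration; please confirm the topic expected by the metric-plotting node in the repository.

# Minimal sketch: report algorithm runtime for the real-time plots.
# The '/runtime' topic name and Float32 type are assumptions; confirm the
# topic expected by the metric-plotting node in the repository.
import time
import rospy
from std_msgs.msg import Float32

rospy.init_node('runtime_reporter_example')
runtime_pub = rospy.Publisher('/runtime', Float32, queue_size=5)  # assumed topic

while not rospy.is_shutdown():
    start = time.time()
    # ... run one planning iteration of the exploration algorithm here ...
    rospy.sleep(0.1)  # placeholder for the actual computation
    runtime_pub.publish(Float32(data=time.time() - start))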

At the end of the run, the recorded metrics are saved to a file in 'src/vehicle_simulator/log', along with the vehicle trajectory. Note that due to the heavy CPU load of Gazebo, the real-time factor is < 1, i.e. the simulated clock runs slower than the real clock. Here, we use the simulated clock to record the time duration of the run since it is less affected by the clock speed on different computers. The best human practice results can be downloaded.

White: Overall Map, Blue: Explored Areas, Colored Path: Vehicle Trajectory

Exploration Metrics

Advanced

Integrating System on Real Robot

When running the system in simulation, the 'vehicle_simulator' package publishes state estimation, registered scan, and /tf messages. The scans are simulated based on a Velodyne VLP-16 lidar and registered in the 'map' frame. These messages (listed below) substitute for the output of the state estimation module on a real robot.

In addition, the 'sensor_scan_generation' package converts the registered scans into the frame associated with the sensor and publishes state estimation messages at the same frequency and timestamps as the scan messages. These messages (listed below) are generated from the output of the state estimation module (listed above) and do not need to be provided by the state estimation module.
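
When integrating on a real robot, it can help to verify that the state estimation messages arrive as expected. Below is a minimal Python sketch; the '/state_estimation' topic name and nav_msgs/Odometry type are assumptions for illustration, so substitute the actual topics listed above.

# Minimal sketch: verify that state estimation messages arrive as expected.
# The '/state_estimation' topic name and Odometry type are assumptions;
# substitute the actual topics published by the system.
import rospy
from nav_msgs.msg import Odometry

def odom_callback(msg):
    p = msg.pose.pose.position
    rospy.loginfo('vehicle at x=%.2f, y=%.2f, z=%.2f (frame: %s)',
                  p.x, p.y, p.z, msg.header.frame_id)

rospy.init_node('state_estimation_monitor_example')
rospy.Subscriber('/state_estimation', Odometry, odom_callback)  # assumed topic
rospy.spin()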

The path following module in the system outputs command velocity messages to control the vehicle.

To run the system on a real robot, use the command line below, which does not launch the vehicle simulator.

roslaunch vehicle_simulator system_real_robot.launch

Forward the command velocity messages from the system to the motion controller on the robot. Then, forward the output of the state estimation module on the robot to the system. Users are encouraged to use the 'loam_interface' package to bridge the state estimation output. Please refer to our instructions to set up a compatible state estimation module from the LOAM family. For more information about system integration, please refer to the Ground-based Autonomy Base Repository.
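
As an illustration of the first step, the Python sketch below relays command velocity messages to a robot-specific base driver. The '/cmd_vel' input topic, its TwistStamped type, and the '/robot_base/cmd_vel' output topic are assumptions for illustration only; use the actual topics of the system and of your robot.

# Minimal sketch: forward the system's command velocity to a robot's motion
# controller. The '/cmd_vel' input topic, its TwistStamped type, and the
# '/robot_base/cmd_vel' output topic are assumptions for illustration only.
import rospy
from geometry_msgs.msg import Twist, TwistStamped

rospy.init_node('cmd_vel_forwarder_example')
motor_pub = rospy.Publisher('/robot_base/cmd_vel', Twist, queue_size=5)  # assumed output

def cmd_callback(msg):
    # Strip the header and pass the twist through to the robot's base driver.
    motor_pub.publish(msg.twist)

rospy.Subscriber('/cmd_vel', TwistStamped, cmd_callback)  # assumed input
rospy.spin()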

Autonomous Navigation System Diagram

Joystick-based System Debugging

The system supports using a joystick controller to interfere with the navigation, operating in smart joystick mode. In this mode, the vehicle is guided by an operator through the joystick controller while avoiding obstacles it encounters. The system is compatible with most PS3/4 and Xbox controllers with a USB or Bluetooth interface (if using the Xbox Wireless USB Adapter, please install xow). A recommended controller is the EasySMX 2.4G Wireless Controller. If using this controller model, make sure the controller is powered on and the two LEDs on top of the center button are lit, indicating the controller is in the correct mode.

During autonomous navigation, pressing any button on the controller brings the system into smart joystick mode. Users can use the right joystick on the controller to navigate the vehicle: pushing the right joystick forward and backward drives the vehicle, and pushing it left and right makes it rotate. Holding the obstacle-check button cancels obstacle checking, and clicking the clear-terrain-map button reinitializes the terrain map. To resume autonomous navigation, hold the mode-switch button and at the same time push the right joystick; the right joystick sets the speed. If only holding the mode-switch button, the system will use the speed received on the '/speed' topic after a few seconds and the vehicle will start to move. More information is available from the Ground-based Autonomy Base Repository.

Compatible with Route Planner

The Fast, Attemptable Route (FAR) Planner, developed by Fan Yang at CMU, uses a dynamically updated visibility graph for fast replanning. The planner models the environment with polygons and builds a global visibility graph during navigation. It is capable of handling dynamic obstacles and works in both known and unknown environments. In a known environment, paths are planned based on a prior map. In an unknown environment, multiple paths are attempted to guide the vehicle to the goal based on the environment layout observed along the way. This video shows FAR Planner in action.

Please follow our instructions to try a Docker demo with FAR Planner navigation in a Unity scene.

FAR Planner in an unknown environment, Blue: Vehicle trajectory, Cyan: Visibility graph, A, C: Dynamic obstacles, B, D, E, F: Dead ends

FAR Planner navigation in a Unity Scene (See Docker demo instructions)

Incorporating Photorealistic Models

The system supports photorealistic environment models from Matterport3D. Please follow our instructions to set up Matterport3D environment models. The process requires converting the meshes from OBJ format to DAE format with MeshLab. To use the Matterport3D environment models, please first switch to the 'distribution-matterport' branch (replace 'distribution' with 'melodic' or 'noetic'). The system generates registered scans, RGB images, depth images, and point cloud messages corresponding to the depth images. Users can also run Habitat afterward or in parallel with the system to render RGB, depth, and semantic images. To conveniently set up the Conda environment for Habitat, users can use this specification file to install all packages in the Conda environment at once.

Top: A Matterport3D environment model, Bottom: RGB, depth, and semantic images rendered by Habitat

Integrating CMU-Recon Models (Under Development)

The system can seamlessly integrate realistic models built by the CMU-Recon System. This feature is only available in Ubuntu 20.04 with ROS Noetic. CMU-Recon models are built from high-fidelity lidar scans and RGB images. To try an example CMU-Recon model, go to the development environment folder in a terminal, switch to the 'noetic-cmu-recon' branch, and then compile.

git checkout noetic-cmu-recon 

catkin_make

Run a script to download the CMU-Recon model. When prompted, enter 'A' to overwrite all existing files.

./src/vehicle_simulator/cmu_recon/download_cmu_recon_model.sh 

Now users can use the command lines below to launch the system. Wait a few seconds for the system to initialize; rendered RGB, depth, and semantic images will then show in RVIZ. To view the rendered RGB or semantic point cloud, click 'Panels->Displays' and check 'ColorCloud' or 'SemanticCloud'.

source devel/setup.sh

roslaunch vehicle_simulator system_cmu_recon_seg.launch

Top: A CMU-Recon model, Middle: Rendered RGB and semantic point clouds, Bottom: Rendered RGB, depth, and semantic images

Considerations

Ground vehicle vs. aerial vehicle: The system is designed for ground vehicles. To a large extent, ground-based exploration is more difficult than aerial exploration because the traversability of a ground vehicle is more limited: aerial vehicles can move freely in 3D space, while ground vehicles have to consider terrain traversability. In other words, the same coverage made by a ground vehicle is guaranteed to be completable by an aerial vehicle carrying the same sensor. Aerial vehicles do have the ability to change altitude for more coverage, but this strength is only meaningful in severely 3D environments where a large number of areas are not reachable by the sensor from the ground. In contrast, our focus is on the complexity of the overall geometric layout of the environments.

Sensor on a gimbal: Using a gimbal to actively point the sensor improves the engineering setup but, in some sense, simplifies the problem, because the vehicle can produce the same coverage while moving around less. Depending on how the gimbal is actually used, a gimbaled sensor can possibly be modeled as a fixed sensor with a larger FOV.

People

Chao Cao
CMU Robotics Institute

Hongbiao Zhu
CMU Robotics Institute

Fan Yang
CMU Robotics Institute

Howie Choset
CMU Robotics Institute

Jean Oh
CMU NREC & Robotics Institute

Ji Zhang
CMU NREC & Robotics Institute

References

C. Cao, H. Zhu, Z. Ren, H. Choset, and J. Zhang. Representation Granularity Enables Time-Efficient Autonomous Exploration in Large, Complex Worlds. Science Robotics. vol. 8, no. 80, 2023. [PDF] [Summary Video]

C. Cao, H. Zhu, F. Yang, Y. Xia, H. Choset, J. Oh, and J. Zhang. Autonomous Exploration Development Environment and the Planning Algorithms. IEEE Intl. Conf. on Robotics and Automation (ICRA). Philadelphia, PA, May 2022. [PDF] [Talk]

Credits

The code is based on Ground-based Autonomy Base Repository by Ji Zhang at CMU.

The tunnel network environment is provided by Tung Dang at the University of Nevada, Reno.

The velodyne_simulator and joystick_drivers packages are from open-source releases.

Links

AI Meets Autonomy: Vision, Language, and Autonomous Systems Workshop & CMU VLA Challenge

Aerial Navigation Development Environment and Air-FAR Planning Algorithm

Autonomy Stack for Mecanum Wheel Platform: Containing SLAM module, base autonomy system, and route/exploration planners.

Autonomy Stack for Diablo Setup: Containing SLAM module, base autonomy system, and route/exploration planners.

Autonomy Stack for Unitree Go2: Containing SLAM module, base autonomy system, and route planner.

CMU-Recon System: Bridging reality to simulation by building realistic models of real-world environments.

Colmap-PCD: Image-to-point-cloud registration tool.