In this tutorial, we will show you how to perform SLAM (Simultaneous Localization and Mapping) and autonomous navigation on a Leo Rover equipped with IMU and LiDAR sensors.
For this purpose, we prepared the leo_navigation package which makes use of many other packages available in ROS to provide autonomous capabilities. You won't have to write a single line of code, but some configuration may need to be tweaked to work best in your environment.
To complete this guide, you will first need to have the IMU and LiDAR sensors integrated with the system. If you haven't done this yet, follow the IMU and LiDAR integration tutorials.
Make sure you are running the newest image on the Raspberry Pi, your packages are up to date, and the newest firmware is flashed.
You will also need to have ROS installed on your computer and some previous experience with ROS is also recommended.
There are a few things you will need to prepare on Leo Rover and on your computer, in order to use the software for autonomous navigation.
You will need to build the leo_navigation package first. To do this, start by accessing a remote terminal session on Leo Rover by logging in via ssh:
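```bash
# the user name and address assume the default system image
ssh pi@10.0.0.1
```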
Create a catkin workspace:
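```bash
# ~/ros_ws is just an example location for the workspace
mkdir -p ~/ros_ws/src
cd ~/ros_ws
```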
Clone the leo_navigation package to the source space:
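The repository URL below is an assumption; substitute the actual location of the leo_navigation sources if it differs:

```bash
cd ~/ros_ws/src
git clone https://github.com/LeoRover/leo_navigation.git
```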
Use rosdep to install dependencies:
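```bash
cd ~/ros_ws
rosdep update
rosdep install --from-paths src --ignore-src -y
```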
Build the workspace:
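```bash
# run from the root of the workspace; if you have the
# catkin_tools package installed, "catkin build" works as well
catkin_make
```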
Source the devel space:
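```bash
source ~/ros_ws/devel/setup.bash
```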
It is also advisable to check whether the sensors are correctly integrated into the system. Make sure the sensor-related data is published on the correct topics:
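```bash
# the topic names follow the ones used later in this tutorial
rostopic hz /imu/data_raw
rostopic hz /scan
```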
You will also need to have the transforms from base_link to the frame ids associated with the sensor messages available in the tf tree. You can check them with the tf_echo tool:
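The target frame names below are examples; check the frame_id fields of your sensor messages:

```bash
rosrun tf tf_echo base_link imu
rosrun tf tf_echo base_link laser_frame
```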
If you see any errors, it probably means you omitted the part about extending the URDF model when integrating the sensor.
In this guide, your computer will be used for visualizing the data processed on the rover and for sending navigation goals. For this purpose, you will need to install the leo_description package which contains the robot model and the leo_viz package which contains configuration files for RViz.
You can build them using the instructions from this chapter. You can also download the prebuilt packages from the ROS repository by executing:
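```bash
# replace "melodic" with the ROS distribution you are using
sudo apt install ros-melodic-leo-description ros-melodic-leo-viz
```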
You will also need to properly configure your computer to communicate in the ROS network, so be sure you have correctly set these environment variables:
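For example, assuming the rover's default address and that your computer got the 10.0.0.2 address on the rover's network:

```bash
export ROS_MASTER_URI=http://10.0.0.1:11311
export ROS_IP=10.0.0.2
```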
For more information, visit the Network Setup page on the ROS wiki (http://wiki.ros.org/ROS/NetworkSetup).
Now that everything is ready, we can proceed to launching some software. This whole section will describe in detail the functionalities provided in the leo_navigation package and how to use them.
Here's a brief summary of the different parts of the navigation software we will run:
- Odometry – estimating the robot's pose from the wheel encoders and the IMU,
- SLAM – building a map of the environment while simultaneously localizing the robot within it,
- AMCL – localizing the robot on a previously built map,
- Navigation – planning and autonomously executing paths to navigation goals on the map.
All of the functions are provided through launch files that are located under the launch/ directory. All parameters for the running nodes are loaded from YAML files located under the config/ directory.
To estimate the robot's position from wheel encoders and IMU, we will use the robot_localization package, which contains two state estimation nodes: ekf_localization_node, which implements the Extended Kalman Filter algorithm, and ukf_localization_node, which implements the Unscented Kalman Filter. We will use the first option, as it is less computationally expensive, though the latter is a more stable alternative.
Without further ado, let's try running the localization software and see the result. Later, we will explain what is actually going on.
On Leo Rover, type:
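```bash
roslaunch leo_navigation odometry.launch
```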
On your computer, type:
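Assuming leo_viz ships an RViz launch file (the exact name may differ in your version of the package):

```bash
roslaunch leo_viz rviz.launch
```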
The RViz instance should start and you should see the robot's model. Now, try to steer the rover (either from the Web UI or by using a joystick). If everything works, you should see the model moving along the ground plane like in this video:
Here's a diagram that illustrates the nodes launched by the odometry.launch file and the connections between them:
The state estimation node requires that the input topics are stamped messages which contain covariance matrices. That's why the leo_navigation package provides the message_filter node, which subscribes to the raw wheel odometry and IMU messages, attaches the covariance matrices loaded from its configuration file, and republishes the messages in the format expected by the state estimation node.
Apart from these topics, the ekf_localization_node listens for the base_link->imu tf transform to transform IMU readings to a common reference frame (base_link in this case).
The current state estimate is published on the /odometry/filtered topic, and the pose is also broadcast as the odom->base_link transform. That's why, with the Fixed Frame set to odom in RViz, you can see the model moving.
Notice that we didn't use much of the measurements returned by the IMU sensor, except for the angular velocity, which was used to correct the wheel odometry.
Thanks to the imu_filter_madgwick package, we can get a pretty good orientation estimate which, combined with the wheel odometry, can be used to track the robot's position in 3D. The package uses Madgwick's filter to fuse data from a gyroscope, an accelerometer and (optionally) a magnetometer.
To try the robot's state estimation in 3D, start the same launch file as in the previous example, but with the three_d argument set to true:
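```bash
roslaunch leo_navigation odometry.launch three_d:=true
```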
Now, try steering the rover on uneven terrain and observe the result in RViz.
The diagram for the odometry.launch file now changes a little:
We now have the /imu_filter_node which takes messages from the /imu/data_raw topic (gyroscope and accelerometer data), calculates the orientation of the sensor and publishes the data with added orientation quaternion on the /imu/data topic. The node can also optionally incorporate the /imu/mag topic (magnetometer data).
The /ekf_localization_node now uses a different configuration, which takes the data from the /imu/data topic and uses the orientation quaternion in the state estimation.
For the /message_filter node, there is the config/message_filter.yaml file in which you can change the values on the diagonals of the covariance matrices, which represent the variances (the uncertainty) of the measurements.
The configuration files for the /ekf_localization_node can be found in the config/ekf_localization_node/ directory. There are two files: ekf_2d.yaml and ekf_3d.yaml. The appropriate file is chosen depending on the value of the three_d argument passed to the launch file.
A full description of parameters for the state estimation node can be found on the robot_localization documentation and in the ekf config template.
There is also the config/imu_filter_node.yaml file for Madgwick's filter. The parameters are explained on the imu_filter_madgwick Wiki page. If you want to use the magnetometer data, just change the use_mag parameter to true.
If you have set the use_mag parameter and want to use the heading information (compass) in /ekf_localization_node, just modify the imu0_config parameter in the config/ekf_localization_node/ekf_3d.yaml file so that the yaw value is fused into the state estimate:
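A sketch of the relevant part of ekf_3d.yaml; only the yaw flag (the 6th value) changes to true, and the remaining values are assumed to match the stock configuration:

```yaml
# the 15 flags correspond to:
# [    x,      y,      z,
#   roll,  pitch,    yaw,
#     vx,     vy,     vz,
#  vroll, vpitch,  vyaw,
#     ax,     ay,     az]
imu0_config: [false, false, false,
              true,  true,  true,
              false, false, false,
              true,  true,  true,
              false, false, false]
```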
SLAM (Simultaneous Localization and Mapping) is the problem of constructing a map of the environment while simultaneously keeping track of the robot within it. It is hard because a map is needed for localization and a good pose estimate is needed for mapping, so this appears to be a chicken-and-egg problem. There are, however, methods of solving it approximately with the use of probabilistic algorithms, such as the particle filter.
One such approach, called GMapping, can be used to perform SLAM with the laser range data (LiDAR scans) and a local odometry source. The algorithm has its ROS wrapper node in the gmapping package. To run it, just type:
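```bash
roslaunch leo_navigation gmapping.launch
```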
On your computer, close any running RViz instance and open a new one with the slam configuration:
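The location and name of the configuration file below are assumptions; adjust them to the files shipped with your version of the packages:

```bash
rviz -d $(rospack find leo_navigation)/rviz/slam.rviz
```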
Now try exploring the terrain with your rover and see the map being created. You will probably see the model making discrete jumps on the map from time to time. This is due to gmapping correcting the robot's position within the map. Here's a video demonstration:
The diagram for the gmapping.launch file is pretty straightforward:
The /slam_gmapping node takes as the input:
- the laser scans published on the /scan topic,
- the odom->base_link transform from the tf tree, which serves as the local odometry source.
As the output, it returns:
- the occupancy grid published on the /map topic,
- the map->odom transform, which carries the information about the odometry drift.
If you have already explored all the terrain you wanted to map, the /slam_gmapping node may needlessly consume resources for its mapping part. When the map no longer needs to be updated, it is more efficient to use an algorithm that only tracks the robot's position against the map you have already generated.
That's where the amcl package comes in handy. It implements the Monte Carlo localization algorithm for laser range data. To use it, first save the current map generated by GMapping using the map_saver node:
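```bash
# run it in the directory where you want the map files to be saved
rosrun map_server map_saver -f mymap
```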
This will create 2 files: mymap.yaml, containing the map's metadata, and mymap.pgm, containing the actual occupancy grid in a binary format.
Close the gmapping.launch instance by pressing Ctrl+C in the terminal session it is running in. Then, start amcl.launch by typing:
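The name of the map argument is an assumption; check the arguments declared in amcl.launch:

```bash
roslaunch leo_navigation amcl.launch map_file:=<path_to_the_map_file>
```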
Replace <path_to_the_map_file> with the absolute path to the mymap.yaml file. If the file is in your current working directory, you can instead type:
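```bash
roslaunch leo_navigation amcl.launch map_file:=$(pwd)/mymap.yaml
```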
If your initial pose is far from the real one, AMCL might not be able to correctly localize the robot on the map. To fix this, you can use the 2D Pose Estimate tool in RViz to manually set the initial pose.
When your initial pose is close enough to the real one, try steering the robot and notice how the position of the robot model in RViz is being corrected by AMCL.
Similar to the /slam_gmapping node, /amcl takes the laser range data (/scan topic) as input and provides the odometry drift information by broadcasting the map->odom transform. The difference is that instead of publishing to the /map topic, it receives the (previously generated) map from the /map_server node.
You can also notice 2 additional topics published by the /amcl node:
- /amcl_pose – the estimated pose of the robot on the map, together with its covariance,
- /particlecloud – the set of pose hypotheses currently maintained by the particle filter.
You can visualize the data published on these topics by enabling the AMCL Pose and/or the AMCL Particle Cloud displays in RViz.
The configuration for the /slam_gmapping node is loaded from the config/slam_gmapping.yaml file. For a description of each parameter, visit the gmapping ROS wiki page.
Here are some parameters you can try to adjust:
- map_update_interval – how often (in seconds) the map is updated; lower values give a more current map at a higher computational cost,
- particles – the number of particles in the filter,
- delta – the resolution of the map (the size of one occupancy grid cell in meters),
- linearUpdate, angularUpdate – how far the robot has to translate or rotate before a new scan is processed.
Similarly, for the /amcl node, there is the config/amcl.yaml file with the parameters that are described in the amcl ROS wiki page.
Here are some parameters worth noticing:
- min_particles, max_particles – the allowed range of the number of particles; more particles improve accuracy at a higher computational cost,
- update_min_d, update_min_a – how far the robot has to translate or rotate before a filter update is performed,
- initial_pose_x, initial_pose_y, initial_pose_a – the initial pose of the robot on the map, used to initialize the filter.
Now that you have the map of the terrain and your rover can localize itself within it (with either gmapping or amcl), you can try to make the robot drive autonomously to the designated position on the map.
To do this, we will utilize 2 packages:
- move_base – given the map and the current sensor readings, plans a collision-free path to the goal and executes it by publishing velocity commands,
- twist_mux – multiplexes several sources of velocity commands, forwarding only the one with the highest priority to the robot.
To start these two nodes on your rover, just type:
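```bash
roslaunch leo_navigation navigation.launch
```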
On your computer, close any running RViz instance and start a new one with the navigation configuration:
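```bash
# as before, the config file location and name are assumptions
rviz -d $(rospack find leo_navigation)/rviz/navigation.rviz
```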
Now, to send a navigation goal from RViz, select the 2D Nav Goal tool from the toolbar located at the top, then click somewhere on the map to set position or click and drag to set position and orientation.
If the goal is achievable, you should see the planned path in RViz and the navigation software should start sending commands to the robot. Make sure no other node is publishing velocity commands (/cmd_vel topic).
Here's a diagram for the navigation.launch file:
In the configuration we are using, the /move_base node's inputs include:
- the laser scans on the /scan topic, used to mark and clear obstacles in the costmaps,
- the map on the /map topic, used by the static layer of the global costmap,
- the filtered odometry on the /odometry/filtered topic,
- the transforms from the tf tree.
The /move_base node can receive navigation goals in 2 ways:
- as a message published on the /move_base_simple/goal topic (this is what the RViz tool does),
- through its SimpleActionServer interface (the move_base action), which additionally lets the client track the status of the goal's execution.
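For example, to send a goal from the command line using the first method (here: a pose 1 meter along the X axis of the map frame):

```bash
rostopic pub -1 /move_base_simple/goal geometry_msgs/PoseStamped \
  '{header: {frame_id: "map"}, pose: {position: {x: 1.0, y: 0.0, z: 0.0}, orientation: {w: 1.0}}}'
```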
When a navigation goal is being executed, the node publishes velocity commands for the rover on the /nav_cmd topic.
The /twist_mux node chooses which topic with velocity commands to forward to the /cmd_vel topic. If no messages are sent on the /ui_vel and /joy_vel topics, /nav_cmd will be selected. To utilize the other topics when using a joystick or the Web UI, you need to make sure the velocity commands from these components are published on the corresponding topics.
For the joystick, if you followed the example from the ROS Development tutorial, you can choose where to publish the commands by specifying the cmd_vel_topic argument to the launch file:
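For example (the package and launch file names are assumptions based on that tutorial; adjust them to your setup):

```bash
roslaunch leo_joy_example joy.launch cmd_vel_topic:=joy_vel
```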
For the Web UI, you need to change cmd_vel to ui_vel in the /opt/leo_ui/js/leo.js file on your rover. You might also need to clear the cache data on your browser.
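One quick way to do it, assuming the topic name appears literally in the file (back up the original first):

```bash
sudo cp /opt/leo_ui/js/leo.js /opt/leo_ui/js/leo.js.bak
sudo sed -i 's/cmd_vel/ui_vel/g' /opt/leo_ui/js/leo.js
```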
Up to now, we treated the /move_base node as a black box to which we send navigation goals and get velocity commands in return. This is fine to start with, but if you want to fine-tune configuration for your environment, you might need to know about various components of move_base and how they work with each other. This will help you to understand the meaning behind the parameters.
move_base uses the costmap_2d package to generate layered 2D costmaps that are used by the planners to plan and execute a collision-free path to the navigation goal. Our configuration defines 3 layers:
- the static layer, which is initialized with the occupancy grid received on the /map topic,
- the obstacle layer, which marks and clears obstacles based on the laser scans,
- the inflation layer, which inflates the obstacles to keep the robot at a safe distance from them.
There are 2 costmaps being used to store information about the environment:
- the global costmap, which spans the whole map and is used for planning paths to the goal,
- the local costmap, a smaller, rolling window around the robot used for local planning and obstacle avoidance.
move_base uses 2 planners that cooperate with each other to accomplish navigation goals:
- the global planner, which uses the global costmap to compute a path from the robot's position to the goal,
- the local planner, which follows the global path while using the local costmap to avoid obstacles and produce the actual velocity commands.
The last bit worth mentioning are the recovery behaviors. When the robot perceives itself as stuck, move_base will try to perform some actions in order to clear out space. The default recovery behaviors will perform the following actions:
- a conservative reset, clearing obstacles in the costmaps outside of a given distance from the robot,
- a clearing rotation, in which the robot rotates in place to let the sensors re-observe the surroundings,
- an aggressive reset, clearing all obstacles outside of the area in which the robot can rotate,
- another clearing rotation.
If the robot is still stuck after performing these actions, move_base will abort the navigation goal.
As the /move_base node uses components that have their own ROS APIs, the configuration has been split into multiple files to make it more modular. There is a total of 6 files under the config/move_base/ directory. Here is a short description of each file and some parameters you can try to adjust:
move_base.yaml – parameters read directly by the /move_base node.
costmaps/costmap_common.yaml – parameters common to the global and local costmaps. These include the footprint polygon and the costmap layers configuration.
costmaps/global_costmap.yaml – parameters loaded by the global costmap.
costmaps/local_costmap.yaml – parameters loaded by the local costmap.
planners/global_planner.yaml – parameters loaded by the Global Planner.
planners/local_planner.yaml – parameters loaded by the Local Planner.
Most of these parameters are available through dynamic_reconfigure so you can modify them at runtime before changing the configuration files, e.g. by running the rqt_reconfigure tool on your computer:
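```bash
rosrun rqt_reconfigure rqt_reconfigure
```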