RPLidar A2M8

Prerequisites

2D LiDAR sensors are widely used in robotics for things such as indoor SLAM (Simultaneous localization and mapping) or safety systems.

This tutorial will guide you through the process of connecting a LiDAR sensor to your Rover and integrating it with the system. We will present the complete instructions using the RPLIDAR A2M8 as an example.

left: Hokuyo URG-04LX; right: RPLIDAR A2M8
The steps for the RPLIDAR are not complete and have not been tested yet.

The steps might slightly differ for other LiDAR sensors but should be essentially similar.

After completing this tutorial, you should be able to visualize the model and data from the sensor like in the image below.

Mounting and wiring the sensor

When mounting the sensor, be particularly careful not to obstruct its field of view with other parts of the Rover.

We developed 3D printable models of parts that allow mounting the aforementioned sensors to the mounting plate located at the top of the robot. The files are listed here:

3D-printed parts - /documentation/3d-printed-parts

The sensor can be connected to the robot's main computer via the mounting plate USB socket.

As for the power supply, you will have to provide an additional connection from the robot's battery to the sensor through a 5V DC converter (unless the sensor is powered via the USB socket).

The mounted sensor should look similar to this:

Integrating the sensor with the system

The first thing you can do is make sure your device has the correct permissions and is available at a fixed path on your system. To do this, you can add a rule to the udev service.

Paste these lines into the /etc/udev/rules.d/lidar.rules file and reload the udev rules by typing:
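The exact rule depends on the USB adapter your sensor uses. The RPLIDAR A2M8 ships with a CP2102 USB-to-UART bridge (USB vendor ID 10c4, product ID ea60); treat these IDs as an assumption and verify them on your device with `lsusb`. A sketch of the rule and the reload commands:

```shell
# Create the rule: match the CP2102 USB-UART bridge of the RPLIDAR A2M8,
# relax its permissions and add a fixed /dev/lidar symlink
echo 'KERNEL=="ttyUSB*", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", MODE="0666", SYMLINK+="lidar"' \
  | sudo tee /etc/udev/rules.d/lidar.rules

# Reload the rules so the change takes effect without a reboot
sudo udevadm control --reload-rules && sudo udevadm trigger
```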

Your device should now be available at the /dev/lidar path.

We want the sensor functionality to be available in the ROS ecosystem, so you should install a ROS package that provides a node for the sensor you are trying to integrate.
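For the RPLIDAR family, the rplidar_ros package provides such a node. Assuming a Debian-based system with ROS Noetic (swap the distribution name for the one you use), it can be installed with apt:

```shell
# Install the RPLIDAR driver node (replace "noetic" with your ROS distro)
sudo apt update
sudo apt install ros-noetic-rplidar-ros
```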

Now, create a launch file that will start the node with a fitting configuration.
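A minimal sketch for the RPLIDAR A2M8, assuming the /dev/lidar path set up earlier and a file placed at /etc/ros/laser.launch (the file name and the frame name are example choices, not fixed by the system):

```xml
<launch>
  <node name="rplidar" pkg="rplidar_ros" type="rplidarNode" output="screen">
    <!-- Fixed device path created by the udev rule -->
    <param name="serial_port"     value="/dev/lidar"/>
    <!-- The A2M8 communicates at 115200 baud -->
    <param name="serial_baudrate" value="115200"/>
    <!-- TF frame the scans will be stamped with -->
    <param name="frame_id"        value="laser_frame"/>
  </node>
</launch>
```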

Include your launch file in the robot.launch file, so that your node will start at boot.

In /etc/ros/robot.launch:
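For example, assuming the launch file from the previous step was saved as /etc/ros/laser.launch, add an include tag inside the existing launch element:

```xml
<launch>
  <!-- ... the rest of the robot's launch configuration ... -->

  <!-- Start the LiDAR driver at boot -->
  <include file="/etc/ros/laser.launch"/>
</launch>
```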

Your robot should be aware of where the sensor is located and what space it occupies. You can achieve this by creating a URDF model of the sensor and including it in the robot description that is uploaded at boot.
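A minimal sketch of such a model, approximating the RPLIDAR A2M8 as a cylinder; the dimensions, link names, and mounting offset below are rough assumptions that you should adjust to your setup:

```xml
<?xml version="1.0"?>
<robot name="lidar">
  <link name="laser_frame">
    <visual>
      <geometry>
        <!-- Rough cylindrical approximation of the sensor body -->
        <cylinder radius="0.038" length="0.04"/>
      </geometry>
    </visual>
    <collision>
      <geometry>
        <cylinder radius="0.038" length="0.04"/>
      </geometry>
    </collision>
  </link>

  <!-- Rigidly attach the sensor to the robot; adjust xyz/rpy
       to match where the sensor is actually mounted -->
  <joint name="laser_joint" type="fixed">
    <origin xyz="0 0 0.1" rpy="0 0 0"/>
    <parent link="base_link"/>
    <child link="laser_frame"/>
  </joint>
</robot>
```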

You can experiment with the URDF file and create a more representative model of the sensor by adding more visual and collision tags or by including meshes in STL or COLLADA format.

The last step is to either reboot the robot or restart the leo service.
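Assuming the ROS nodes run under a systemd service named leo (as on the stock Rover image), restarting it looks like this:

```shell
# Restart the service that manages the robot's ROS nodes
sudo systemctl restart leo
```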

Reading and visualizing the data

The robot should now publish LaserScan messages on the /scan topic. You can check the raw data it sends by typing:
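For example, on the robot (or on any machine connected to its ROS network):

```shell
# Print incoming sensor_msgs/LaserScan messages to the terminal
rostopic echo /scan
```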

If you have ROS installed on your computer, you can get a more graphical representation of the data with RViz. If you don't have ROS, you can follow this guide:

Install ROS on your computer

/development-tutorials/ros-development/install-ros-on-your-computer

Before starting RViz, make sure you have completed the Connecting other computer to ROS network section of the ROS Development tutorial:

ROS Development

Now, open RViz by typing rviz in the terminal, or, if you have the leo_viz package installed, type:
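Assuming leo_viz provides an rviz.launch file (as in the stock packages):

```shell
# Start RViz preconfigured with the Leo Rover model
roslaunch leo_viz rviz.launch
```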

This will start RViz with visualization of the current robot model.

On the Displays panel, click Add -> By topic and search for the /scan topic. Choose the LaserScan display and click Ok. You should now be able to see the data from the sensor visualized as points in 3D space.

To put the points into the camera image, you can also add the Camera display (be sure to check compressed as the Transport Hint). Here's an example end result:

Need help? Contact us - contact@fictionlab.pl