Let’s take a closer look at a new feature for the Leo Rover – the line follower.
You may already know that stock Leo Rovers now ship with brand new features as part of the Leo Examples package, and the major one is the line follower. It’s free and available to everybody, so don’t hesitate to run it on your robot. But first, let’s find out more about the feature :)
A line follower is a feature that makes a robot autonomously follow a visual line marked on the surface, with its steering controlled by a feedback mechanism.
Detecting the line and steering the robot to stick to the designated track, while repeatedly correcting erroneous movements through feedback, adds up to a simple and practical closed-loop system.
Not all line followers work the same way. The one in the Leo Rover mobile robot is based on a neural network, which first had to be trained to perform well. The network takes an image from the Leo Rover’s camera and preprocesses it – resizing it, among other things – so that it’s suitable input for the network. It then processes the image and outputs two values: the linear and angular velocities. These two values are packed into a Twist message and published on the appropriate topic (cmd_vel). Once the node is launched, the neural network takes over and controls the robot. The rover drives on its own based on what it “sees”.
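To make the last step concrete, here is a minimal sketch (plain Python, no ROS installation needed) of how the two network outputs could be packed into a Twist-style message. The field names mirror geometry_msgs/Twist; the `outputs_to_twist` helper is a hypothetical name, not part of the actual Leo Examples code, and the real node would publish the message on cmd_vel with a ROS publisher.

```python
class Twist:
    """Stand-in for geometry_msgs/Twist: linear and angular velocity vectors."""
    def __init__(self):
        self.linear = {"x": 0.0, "y": 0.0, "z": 0.0}
        self.angular = {"x": 0.0, "y": 0.0, "z": 0.0}

def outputs_to_twist(linear_vel, angular_vel):
    """Map the two network outputs onto the fields a cmd_vel subscriber expects."""
    msg = Twist()
    msg.linear["x"] = linear_vel    # forward speed, m/s
    msg.angular["z"] = angular_vel  # turn rate, rad/s
    return msg

msg = outputs_to_twist(0.4, -0.15)
print(msg.linear["x"], msg.angular["z"])  # 0.4 -0.15
```

In the actual node this would run in a loop: grab a camera frame, run the network, build the message, publish, repeat.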
As mentioned above, not every line follower works the same way, but the end effect is pretty much the same in all of them – the robot simply follows a line. In the case of the Leo Rover’s line follower, though, the robot follows a track marked by two lines. We decided to go for two lines instead of one to increase reliability: depending on lighting conditions and other factors, a single line might not always be fully visible in the rover’s camera, and the neural network could struggle to keep the rover on track if a disturbance appeared. So, a two-line track seemed like the better solution.
The image preprocessing allows a Leo Rover to follow a track between two lines of any color (although it’s best if the lines’ color differs from the ground’s). How does it work? The line color is extracted from the image sent by the rover’s camera. Knowing that the line is, say, blue, you can focus on that one particular color, and after the preprocessing you get a black-and-white image in which every white pixel was blue in the original. With a black-and-white image, the calculations are easier and faster.
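A minimal sketch of that masking step, assuming a plain per-channel tolerance around the target color (the real preprocessing may differ, e.g. by working in another color space):

```python
import numpy as np

def color_mask(image, target, tol=40):
    """Return a black-and-white mask: white (255) where a pixel is within
    `tol` of the target color on every channel, black (0) elsewhere."""
    image = np.asarray(image, dtype=np.int16)    # int16 avoids uint8 wrap-around
    target = np.asarray(target, dtype=np.int16)
    close = np.abs(image - target) <= tol        # per-channel closeness test
    return np.where(close.all(axis=-1), 255, 0).astype(np.uint8)

# Tiny 1x3 RGB image: blue, red, and blue-ish pixels; the target is pure blue.
img = [[[0, 0, 255], [255, 0, 0], [10, 20, 240]]]
mask = color_mask(img, target=[0, 0, 255])
print(mask)  # [[255   0 255]]
```

The blue and blue-ish pixels come out white, the red one black – exactly the kind of binary image that is cheap to process further.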
Other than a Leo Rover itself, all we needed was insulating tape and software uploaded to the robot. Sounds easy, doesn’t it? That’s because that’s the easy part. But there’s more to it.
The hardest part of working on this feature was training the neural network. It’s quite computationally expensive and a bit time-consuming, depending on how much data you have. Not that we’re complaining, though ;)
When it comes to the network architecture for the line follower feature, it could have been either big or small. The smaller the network, the faster the calculations: both training and inference (feeding in an image and getting an output) take much less time than with a bigger architecture.
There are libraries available for running neural networks on specific devices. These libraries often convert the neural network to a "lighter" one, so that it performs better on such hardware. A library of this kind is, for example, TensorFlow Lite, which we used to convert our neural network model.
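To give a feel for what "lighter" means, here is a rough numpy illustration of one technique such converters offer: post-training int8 quantization, which stores weights in 1 byte instead of 4. This is a conceptual sketch, not the actual TensorFlow Lite converter API, and the numbers are made-up example weights.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization of float32 weights to int8 -
    the core idea behind shrinking a model roughly 4x."""
    scale = np.abs(weights).max() / 127.0       # map the largest weight to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

w = np.array([0.51, -0.08, 0.0, 0.97], dtype=np.float32)
q, scale = quantize_int8(w)
print(w.nbytes, "->", q.nbytes, "bytes")        # 16 -> 4 bytes
print(np.max(np.abs(w - dequantize(q, scale)))) # small rounding error
```

The saved bytes and the cheaper integer arithmetic are what make the converted model faster on small onboard computers, at the cost of a tiny rounding error per weight.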
To learn how to run the line follower on a Leo Rover, go to this tutorial. And as mentioned, it’s not the only new feature we’ve added recently! Make sure to check out our other new integrations: object detection and ARTag follower. They’re all part of the Leo Examples package.