There are several techniques for determining a robot's position in space. Odometry, for example, is widely used on wheeled systems: starting from a known position, the movement of each wheel is measured in order to estimate the final position.
It's often combined with other techniques because odometry alone isn't very accurate, and it obviously can't be used on walking robots.
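For a differential-drive (two-wheeled) robot, the odometry update can be sketched in a few lines of Python. The formulas are the standard dead-reckoning equations; the function name and parameters are mine, for illustration only:

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheel_base):
    """Dead-reckoning update for a differential-drive robot.

    d_left / d_right are the distances travelled by each wheel
    (encoder ticks * distance per tick); wheel_base is the distance
    between the two wheels."""
    d_center = (d_left + d_right) / 2.0        # distance travelled by the robot's centre
    d_theta = (d_right - d_left) / wheel_base  # change in heading
    # Advance along the average heading during the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2 * math.pi)
    return x, y, theta
```

Each small error in the wheel measurements accumulates at every update, which is why odometry drifts over time.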
SLAM (Simultaneous Localization And Mapping)
SLAM is a set of techniques that allow a robot to map its environment and find its way around in it. This mapping can be done in different ways: with an RGB camera, a 3D camera or a LIDAR. Here we will focus on the use of LIDAR. But first, what is it? A LIDAR (see the picture above) is a device that emits a laser beam and measures the time the light takes to come back in order to estimate the distance of surrounding obstacles. A 360° LIDAR is generally used: it spins on itself and returns measurements as (angle, distance) pairs. With these data we can determine which obstacles surround the robot (walls, furniture, etc.).
Example of data collected during a scan with a LIDAR:
Each point represents a measurement. By drawing these points, we can clearly see the contours of a room.
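To draw the points, each (angle, distance) measurement is converted into Cartesian coordinates in the robot's frame. The conversion is standard trigonometry; the function name below is mine:

```python
import math

def scan_to_points(scan):
    """Convert raw LIDAR measurements, given as (angle in degrees,
    distance) pairs, into Cartesian (x, y) points."""
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for a, d in scan]
```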
Here’s another LIDAR scan, after the robot moved slightly:
We recognize the contours seen in the first scan, shown from a different angle.
By calculating the offset between these two scans, it’s possible to accurately determine the movement of the robot.
While it's easy for you to spot the matching points between the two scans, a machine without "intelligence" needs a specific methodology to do so. We need an algorithm that can find pairs of points, one in each scan, that correspond to each other. We then use these pairs to deduce the displacement vectors and compute the offset.
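Once two pairs of corresponding points are known, the rotation and translation between the scans can be recovered. Here is a hypothetical helper (not taken from the repo) showing the geometry, assuming exact correspondences:

```python
import math

def offset_from_pairs(a1, a2, b1, b2):
    """Estimate the rotation and translation between two scans from two
    corresponding point pairs: a1->b1 and a2->b2 (a* from the first
    scan, b* from the second)."""
    # Rotation: angle between the vector a1->a2 and the vector b1->b2.
    ang_a = math.atan2(a2[1] - a1[1], a2[0] - a1[0])
    ang_b = math.atan2(b2[1] - b1[1], b2[0] - b1[0])
    rot = ang_b - ang_a
    rot = math.atan2(math.sin(rot), math.cos(rot))  # normalise to (-pi, pi]
    # Translation: where a1 lands after rotation, compared with b1.
    c, s = math.cos(rot), math.sin(rot)
    tx = b1[0] - (c * a1[0] - s * a1[1])
    ty = b1[1] - (s * a1[0] + c * a1[1])
    return rot, tx, ty
```

In practice more than two pairs are used and the transform is averaged or fitted, since real measurements are noisy.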
For each point, we compute its distance to every other point in the scan and sum these distances.
Once we've done this for the first scan, we do the same with the second one. We then compare the two sets of sums: points whose sums are closest are considered to correspond. This gives us corresponding points between the scans.
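The idea works because the sum of distances to all other points doesn't change when the whole scan is rotated or translated, so it acts as a signature for each point. A minimal sketch of the technique (function names are mine; the repo's actual implementation may differ):

```python
import math

def distance_sums(points):
    """For each point, sum its distances to every other point in the
    scan. This sum is invariant to rotation and translation, so it can
    serve as a signature to match points across two scans."""
    return [sum(math.hypot(px - qx, py - qy) for qx, qy in points)
            for px, py in points]

def match_points(scan_a, scan_b):
    """Pair each point of scan_a with the point of scan_b whose
    distance sum is closest (brute-force version)."""
    sums_a = distance_sums(scan_a)
    sums_b = distance_sums(scan_b)
    matches = []
    for i, sa in enumerate(sums_a):
        j = min(range(len(sums_b)), key=lambda k: abs(sums_b[k] - sa))
        matches.append((i, j))
    return matches
```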
The problem with this technique is that it requires a lot of iterations and computation. For example, on a MacBook Pro Core i7 2.2 GHz, it takes about 1 second to compute the sums and iterate over all points. On a Raspberry Pi 3 (the kind of hardware used in robotics because of its small size, low price and low energy consumption), it takes about 10 times longer. It's therefore necessary to optimize the algorithm if we want to run it on low-powered hardware. By using pruning techniques, I managed to significantly reduce the computation time: my algorithm now takes less than 100 milliseconds to determine the corresponding points.
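The article doesn't detail the exact pruning used, but one common way to avoid the quadratic matching step is to sort one scan's sums once and then binary-search for each nearest sum, an assumption-laden sketch of which is:

```python
import bisect

def match_points_fast(sums_a, sums_b):
    """Match points by nearest distance sum using a sorted list and
    binary search, instead of scanning all of B for every point of A.
    (One possible pruning; the article's actual code may differ.)"""
    order = sorted(range(len(sums_b)), key=lambda k: sums_b[k])
    sorted_b = [sums_b[k] for k in order]
    matches = []
    for i, sa in enumerate(sums_a):
        pos = bisect.bisect_left(sorted_b, sa)
        # The nearest sum is either at pos or just before it.
        candidates = [p for p in (pos - 1, pos) if 0 <= p < len(sorted_b)]
        best = min(candidates, key=lambda p: abs(sorted_b[p] - sa))
        matches.append((i, order[best]))
    return matches
```

This turns the comparison step from O(n²) into O(n log n), which is the kind of saving that matters on a Raspberry Pi.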
Here’s a demonstration, simply click on the “calculate” button to see both scans overlap:
Once the mapping and localization part is done, what remains is to implement an algorithm that lets the robot determine the shortest way from one point to another through the various obstacles in its path.
For example, we can use a genetic algorithm (the principle of "natural selection" applied to a population of candidate solutions). Click the "start" button to compute the shortest route:
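The demo's exact encoding isn't shown in the article, so here is a toy sketch of the genetic-algorithm idea: each candidate solution is a short list of waypoints, fitness is the path length plus a heavy penalty for crossing an obstacle, and each generation keeps the fittest candidates and breeds mutated offspring from them. All names and parameters below are mine:

```python
import math, random

# Toy setting: go from START to GOAL while avoiding circular obstacles.
START, GOAL = (0.0, 0.0), (10.0, 0.0)
OBSTACLES = [((5.0, 0.0), 2.0)]               # (centre, radius)
N_WAYPOINTS, POP_SIZE, GENERATIONS = 3, 60, 200

def path_of(genome):
    return [START] + genome + [GOAL]

def segment_hits(p, q, centre, r, steps=20):
    # Sample along the segment and test each sample against the circle.
    return any(math.hypot(p[0] + (q[0] - p[0]) * t / steps - centre[0],
                          p[1] + (q[1] - p[1]) * t / steps - centre[1]) < r
               for t in range(steps + 1))

def fitness(genome):
    """Path length plus a large penalty per colliding segment (lower is better)."""
    path = path_of(genome)
    length = sum(math.hypot(q[0] - p[0], q[1] - p[1])
                 for p, q in zip(path, path[1:]))
    penalty = sum(100.0 for p, q in zip(path, path[1:])
                  for centre, r in OBSTACLES if segment_hits(p, q, centre, r))
    return length + penalty

def random_genome():
    return [(random.uniform(0, 10), random.uniform(-5, 5)) for _ in range(N_WAYPOINTS)]

def mutate(genome):
    return [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5)) for x, y in genome]

def crossover(a, b):
    cut = random.randint(0, N_WAYPOINTS)
    return a[:cut] + b[cut:]

def evolve():
    pop = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness)
        survivors = pop[:POP_SIZE // 4]        # "natural selection"
        pop = survivors + [mutate(crossover(random.choice(survivors),
                                            random.choice(survivors)))
                           for _ in range(POP_SIZE - len(survivors))]
    return min(pop, key=fitness)
```

After a few hundred generations the best genome describes a detour around the obstacle rather than a straight, blocked line.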
There are much more efficient SLAM algorithms out there. Some come with ROS (Robot Operating System), but I thought it was interesting to design a solution from scratch in order to gain a better overall understanding of one of the main problems in robotics.
Here's a link to my point cloud mapping algorithm, available in both JS and Python (one of the most used languages in robotics): https://github.com/eddybordi/cloud-point-mapping