LiDAR Basics: The Coordinate System
It’s critical that we familiarise ourselves with the coordinate system of a LiDAR in order to process the point clouds generated by the sensor mounted on an autonomous car. In this post, we will look into the coordinate system used by LiDARs.
LiDAR (Light Detection and Ranging) is used to find the precise distance of objects relative to us.
A LiDAR calculates the distance to an object using time of flight.
When a laser pulse is emitted, its time of shooting and direction are registered. The pulse travels through the air until it hits an obstacle, which reflects part of the energy. When that reflected portion reaches the sensor, the time of acquisition and the received power are registered. After each scan, the sensor returns the time of acquisition along with the received power (as reflectance), and the spherical coordinates of the obstacle are calculated from the time of acquisition.
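As a rough illustrative sketch (not tied to any particular sensor or driver), the range follows directly from the round-trip time of the pulse:

```python
# Illustrative time-of-flight range calculation.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def range_from_time_of_flight(round_trip_time_s: float) -> float:
    """Distance to the obstacle given the round-trip time of the laser pulse."""
    # The pulse travels to the obstacle and back, so halve the total path.
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# Example: a pulse that returns after 200 nanoseconds hit something roughly 30 m away.
distance_m = range_from_time_of_flight(200e-9)
```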
As the LiDAR sensor returns readings in spherical coordinates, let’s brush up on the spherical coordinate system.
Spherical Coordinate System
In a spherical coordinate system, a point is defined by a distance and two angles. To represent the two angles, we use the azimuth (θ) and polar angle (ϕ) convention. Thus a point is defined by (r, θ, ϕ).
As you can see from the above diagram, the azimuth angle lies in the X-Y plane and is measured from the X-axis, while the polar angle lies in the Z-Y plane and is measured from the Z-axis.
From the above diagram, we can derive the following equations for converting Cartesian coordinates to spherical coordinates.
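For a point (x, y, z), these relations are:

r = \sqrt{x^2 + y^2 + z^2}
\theta = \arctan(y / x)
\phi = \arccos(z / r)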
We can derive Cartesian coordinates from spherical coordinates using the equations below.
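Conversely, for a point (r, θ, ϕ):

x = r \sin\phi \cos\theta
y = r \sin\phi \sin\theta
z = r \cos\phi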
A LiDAR returns readings in spherical coordinates. As you can see from the diagram below, there is a slight difference from the convention discussed above.
In the sensor coordinate system, a point is defined by (radius r, elevation ω, azimuth α). The elevation angle ω lies in the Z-Y plane and is measured from the Y-axis. The azimuth angle α lies in the X-Y plane and is also measured from the Y-axis.
The azimuth angle depends on the rotational position of the sensor at the moment a laser is fired and is registered at the time of firing. The elevation angle of each laser emitter is fixed within the sensor. The radius is calculated from the time taken by the beam to return.
Cartesian coordinates can be derived using the following equations.
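Under the convention above (the same one used in the Velodyne VLP-16 manual), these work out to:

x = r \cos\omega \sin\alpha
y = r \cos\omega \cos\alpha
z = r \sin\omega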
The Cartesian coordinate system is easier to manipulate, so most of the time we need to convert the spherical readings from the sensor to Cartesian coordinates using the equations above. Drivers of LiDAR sensors usually do this computation for us. For example, Velodyne LiDAR sensors provide a ROS package, velodyne_pointcloud, that performs this conversion.
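As a minimal illustrative sketch (not the velodyne_pointcloud implementation; the function name here is made up), the conversion for a batch of readings could look like this:

```python
import numpy as np

def sensor_to_cartesian(r, omega, alpha):
    """Convert sensor readings (radius r, elevation omega, azimuth alpha),
    with angles in radians, to Cartesian (x, y, z).

    Uses the convention described above: elevation measured from the X-Y
    plane and azimuth measured from the Y-axis in the X-Y plane.
    """
    r, omega, alpha = (np.asarray(a, dtype=float) for a in (r, omega, alpha))
    x = r * np.cos(omega) * np.sin(alpha)
    y = r * np.cos(omega) * np.cos(alpha)
    z = r * np.sin(omega)
    return x, y, z

# Example: a single return at 10 m range, 2 degrees elevation, 45 degrees azimuth.
x, y, z = sensor_to_cartesian(10.0, np.radians(2.0), np.radians(45.0))
```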
A car coordinate system with LiDAR mounted on it.
The diagram above shows the Cartesian coordinate system of a sensor mounted on a car.
So, in this post, we learned how a LiDAR calculates distance and which coordinate systems are involved.
References for further reading:
- Oliver Cameron’s blog post on how the LiDAR sensor is changing self-driving cars is a good read.
- Read more about the spherical coordinate system on Wikipedia.
- The Velodyne VLP-16 manual is a good starting point to learn more about how LiDAR works.