Recent advances in LiDAR sensor technology have enabled low-cost laser scanners light enough to be carried by low-cost drones. However, these sensors provide relatively low resolution, making sensor alignment and boresight calibration difficult. Conventional techniques for LiDAR boresight calibration rely on Ground Control Points (GCPs). Given the difficulty of identifying GCPs in the low-density point clouds captured by these LiDAR sensors, such as that shown in Figure 1, we present a feature-based registration method that determines the boresight calibration parameters using control planes instead of individual points or any GCPs.
Mobile Mapping and Sensor Alignment
Mobile Mapping is the technique of acquiring accurate geospatial information about a scene using multiple sensors mounted on a mobile platform. A typical Mobile Mapping System (MMS) includes a Global Navigation Satellite System (GNSS) receiver, an Inertial Measurement Unit (IMU), and an active (LiDAR) or passive (camera) vision sensor. The accuracy of the MMS is dominated by the quality of the GPS/IMU trajectory and the sensor alignment. Sensor alignment is even more critical when the system is mounted on a UAV, where vibration can affect the alignment between the IMU and LiDAR sensors. Additionally, because of the limited field of view of lightweight MMSs, the sensor alignment may need to be adjusted per project and changed based on the objects of interest. Hence the alignment between the IMU and the camera or LiDAR sensors needs to be determined frequently, including after payload integration, for project calibration, or as part of scheduled calibration.
What Control Features Are Available?
Considering the limitations of identifying control points in low-density scans, researchers have used higher-level control features such as lines, planes, or free-form surfaces that are common to the LiDAR point cloud datasets. Figure 2 illustrates a control surface (dark gray) and an arbitrary surface (light gray). The arbitrary surface can be registered using control points, lines, or surfaces that are visible in both datasets.
This paper demonstrates a method that takes all points constituting conjugate planes into the registration mathematical model, in contrast to just a few sampled points. The proposed data-driven calibration method assumes that only mounting parameters, consisting of a 3D rotation (the boresight rotation angles) and a 3D translation (the boresight translation), exist between the raw and registered point clouds. Hence the mounting bias parameters and the determined rigid-body transformation parameters can be considered identical, as expressed in Eq. (1):
X_C = R(ω, φ, κ) · X_L + T + e    (1)

where X_C is the vector of coordinates of points on the control plane; R(ω, φ, κ) is the 3D rotation matrix formed from the boresight rotation angles; T is the boresight translation; X_L is the vector of coordinates of points on the LiDAR plane; and e refers to random errors. The assumption that other systematic errors can be neglected is justified because the proposed boresight calibration is performed in a laboratory, without using GPS/IMU data and with the MMS kept static for the duration of the calibration. In the control-plane approach, the boresight calibration method determines the alignment between the IMU and LiDAR frames by minimizing the volume formed between the low-density LiDAR surface, with unknown boresight parameters, and the control surface.
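To illustrate this model, the following is a minimal Python/NumPy sketch of Eq. (1): it builds R(ω, φ, κ) as a product of axis rotations and applies the boresight rotation and translation to LiDAR-frame points. The rotation order (Rz·Ry·Rx) and the function names are assumptions made for the example, not part of the calibration software.

```python
# Minimal sketch of the rigid-body model in Eq. (1): X_C = R(omega, phi, kappa) * X_L + T + e.
# The rotation order below (Rz @ Ry @ Rx) is an assumption for illustration.
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """3D rotation matrix R(omega, phi, kappa) built from the boresight angles (radians)."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(omega), -np.sin(omega)],
                   [0, np.sin(omega),  np.cos(omega)]])
    Ry = np.array([[ np.cos(phi), 0, np.sin(phi)],
                   [0, 1, 0],
                   [-np.sin(phi), 0, np.cos(phi)]])
    Rz = np.array([[np.cos(kappa), -np.sin(kappa), 0],
                   [np.sin(kappa),  np.cos(kappa), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_boresight(lidar_points, omega, phi, kappa, translation):
    """Transform an N x 3 array of LiDAR-frame points into the IMU (control) frame."""
    R = rotation_matrix(omega, phi, kappa)
    return lidar_points @ R.T + np.asarray(translation)
```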
Volume Minimization Algorithm
The basic idea of the volume minimization method is to find the transformation parameters that generate the minimum volume between corresponding 3D planes in two coordinate systems. The volume computation between surfaces is not trivial; hence we propose the use of 3D Delaunay triangulation to compute the volume between the corresponding surfaces. In this approach, 3D Delaunay triangulation is used to form a surface that represents the volume to be minimized between conjugate planes. The total volume of all tetrahedra created through 3D Delaunay triangulation is given by Eq. (2) for n tetrahedra. In the boresight calibration problem, however, the LiDAR point cloud points carry the boresight angles and boresight translation as unknowns, which are determined by minimizing the volume between the control and LiDAR surfaces. The control surface is in the IMU (body) frame, whereas the LiDAR surface is in a local LiDAR sensor frame. Though any free-form surface can be used in Volume Minimization, for simplicity planar patches are used.
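As a concrete sketch of the volume computation behind Eq. (2), the example below (assuming NumPy and SciPy) triangulates the combined control and LiDAR points with a 3D Delaunay triangulation and sums the volumes of the tetrahedra that span both planes; tetrahedra whose vertices all lie on a single plane are skipped, as they are degenerate and contribute essentially zero volume. The function names are illustrative only.

```python
# Sketch of the total-volume computation of Eq. (2) using 3D Delaunay triangulation.
import numpy as np
from scipy.spatial import Delaunay

def tetrahedron_volume(p1, p2, p3, p4):
    """Volume of one tetrahedron: |det([p1 - p4, p2 - p4, p3 - p4])| / 6."""
    return abs(np.linalg.det(np.column_stack((p1 - p4, p2 - p4, p3 - p4)))) / 6.0

def volume_between_planes(control_pts, lidar_pts):
    """Total volume of the Delaunay tetrahedra formed between two conjugate planar patches."""
    points = np.vstack((control_pts, lidar_pts))
    tri = Delaunay(points)
    n_control = len(control_pts)
    total = 0.0
    for simplex in tri.simplices:                  # each simplex holds 4 vertex indices
        if np.sum(simplex < n_control) in (0, 4):  # skip tetrahedra lying entirely in one plane
            continue
        p1, p2, p3, p4 = points[simplex]
        total += tetrahedron_volume(p1, p2, p3, p4)
    return total
```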
The proposed approach for determining the boresight alignment as a 3D rigid-body transformation consists of the following steps:
- Select 3 or more coplanar points on corresponding LiDAR and control point clouds
- Establish the 3D Delaunay triangulation of both LiDAR plane and control plane points
- Classify the generated tetrahedra into the following three types as illustrated in Figure 3
- Type I—one point from control plane and three points from LiDAR plane
- Type II—two points from control plane and two points from LiDAR plane
- Type III—three points from control plane and one point from LiDAR plane
- Determine the rigid-body transformation parameters that minimize the volume of each category of tetrahedra. The volume between two planar surfaces from the LiDAR and control data is the sum of the volumes of the tetrahedra that are formed only between them. In order to determine the 3D rigid-body transformation parameters that transform the LiDAR points into the control plane coordinate system, or IMU frame, the volume equation needs to be written in terms of the unknown transformation parameters. For Type I, II and III tetrahedra, the coordinates in Eq. (2) can represent either coordinates of points on the control plane or coordinates of LiDAR plane points in the control coordinate frame, denoted X_i, Y_i, Z_i. However, the transformation between the LiDAR sensor frame and the control coordinate frame is unknown. Hence X_i, Y_i, Z_i need to be expressed in terms of a 3D rigid-body transformation, which also represents the boresight parameters, as shown in Eq. (3):
In the equation above, X_i, Y_i, Z_i are the coordinates of the LiDAR point cloud in the control coordinate system. The coordinates of LiDAR points in sensor coordinates are given by x′, y′ and z′. Each observation equation in the form of Eq. (2) has to be expanded by substituting Eq. (3). Partial derivatives are then taken with respect to the unknown transformation parameters, the boresight angles denoted ω, φ and κ and the boresight translation denoted T_X, T_Y and T_Z, for each tetrahedron type. The transformation parameters that minimize the volume of each tetrahedron are determined by volume minimization; each tetrahedron volume equation is considered an observation equation for the least-squares adjustment. As there are six unknown parameters, at least six tetrahedra formed between conjugate planes in the LiDAR and control surfaces are needed. However, to avoid degeneracy, tetrahedra that represent three mutually perpendicular conjugate planes are required.
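To make the adjustment step concrete, the sketch below (again assuming NumPy and SciPy) fixes the Delaunay tetrahedra at the initial parameter values and then solves for the six boresight parameters by minimizing the per-tetrahedron volume observations with scipy.optimize.least_squares. Unlike the analytical adjustment described above, the partial derivatives here are approximated numerically by the optimizer; the function names and the Euler-angle convention are assumptions for the example.

```python
# Numerical sketch of the volume-minimization adjustment for the six boresight parameters.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import Delaunay
from scipy.spatial.transform import Rotation

def solve_boresight(plane_pairs, x0=np.zeros(6)):
    """Estimate (omega, phi, kappa, TX, TY, TZ) from a list of (control_pts, lidar_pts)
    pairs covering at least three mutually perpendicular conjugate planes."""

    def transform(lidar_pts, params):
        omega, phi, kappa, tx, ty, tz = params
        R = Rotation.from_euler("xyz", [omega, phi, kappa]).as_matrix()
        return lidar_pts @ R.T + np.array([tx, ty, tz])   # Eq. (3): LiDAR points in the control frame

    # Triangulate once at the initial values and keep only the mixed (Type I-III) tetrahedra.
    observations = []
    for control_pts, lidar_pts in plane_pairs:
        points0 = np.vstack((control_pts, transform(lidar_pts, x0)))
        tri = Delaunay(points0)
        n_control = len(control_pts)
        mixed = np.array([s for s in tri.simplices if np.sum(s < n_control) not in (0, 4)])
        observations.append((control_pts, lidar_pts, mixed))

    def residuals(params):
        volumes = []
        for control_pts, lidar_pts, simplices in observations:
            points = np.vstack((control_pts, transform(lidar_pts, params)))
            for simplex in simplices:
                p1, p2, p3, p4 = points[simplex]
                volumes.append(abs(np.linalg.det(
                    np.column_stack((p1 - p4, p2 - p4, p3 - p4)))) / 6.0)
        return np.asarray(volumes)        # one volume observation per tetrahedron

    return least_squares(residuals, x0).x
```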
Lab Calibration and Experimental Results
We tested our approach by running the proposed algorithm in a closed room containing multiple planar surfaces. The estimated calibration values were later used and evaluated on a UAV MMS mapping flight. Our MMS consisted of a Velodyne HDL-32E LiDAR and Geodetics' Geo-iNAV inertial navigation system. The MMS, with the LiDAR and GPS/IMU installed and their axes clearly visible, was placed in the middle of the room so that the LiDAR sensor could capture most of the planes in the room without having to move the MMS. For ground truth, a 3D point cloud of the room was collected using an independent static Terrestrial Laser Scanner (TLS). By default, the static TLS collects data in its local coordinate system, with its origin at the centroid of the scanning mirror after removing offsets. In order to transform the TLS point cloud into the IMU coordinate system, the plane containing the IMU X and Y axes is first extracted from the TLS data. Then the IMU X and Y axes, which lie on this XY plane, are digitized from the point cloud. The Z axis is perpendicular to the XY plane and passes through the origin. With these adjustments, the TLS data is transformed into the IMU frame and used as the control surface. After the TLS truth data collection, the MMS LiDAR sensor was used to collect 3D data of the room. Figure 4 shows the 3D laser scan from the TLS.
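As an illustration of this step, the sketch below (assuming NumPy; the variable names are hypothetical) builds the IMU frame from a digitized origin and points along the digitized X and Y axes, orthogonalizes the Y axis against X, takes Z as their cross product, and re-expresses the TLS points in that frame.

```python
# Sketch of transforming TLS points into the IMU frame defined by digitized axes.
import numpy as np

def tls_to_imu(tls_points, origin, point_on_x, point_on_y):
    """Express an N x 3 array of TLS points in the IMU frame digitized from the point cloud."""
    x_axis = point_on_x - origin
    x_axis /= np.linalg.norm(x_axis)
    y_axis = point_on_y - origin
    y_axis -= np.dot(y_axis, x_axis) * x_axis      # enforce orthogonality (Gram-Schmidt)
    y_axis /= np.linalg.norm(y_axis)
    z_axis = np.cross(x_axis, y_axis)              # Z perpendicular to the XY plane
    R = np.vstack((x_axis, y_axis, z_axis))        # rows are the IMU axes in TLS coordinates
    return (tls_points - origin) @ R.T
```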
Like the static TLS, the MMS LiDAR sensor by default collects data in LiDAR sensor coordinates, whose misalignment with respect to the IMU frame needs to be computed. After the point cloud collection by both the TLS and the MMS LiDAR sensor and preliminary processing of the TLS data, there are two point clouds of the room, one in the IMU frame and the other in the LiDAR sensor frame. As discussed earlier, in order to use the Volume Minimization algorithm, the LiDAR sensor should be placed such that at least three mutually perpendicular planes are visible, to avoid degeneracy in the volume minimization problem. Three such planes and two additional planes in the dataset were chosen from both the TLS and MMS LiDAR point clouds. The volumes between corresponding planes were then determined using 3D Delaunay triangulation. The derived boresight parameters are shown in Table 1. The resulting estimated standard deviation per unit weight is 0.0361 cubic meters.
After boresight calibration in the lab environment, the results were tested on an MMS UAV test flight. The flight plan was designed with multiple crossing paths within the area of interest, with the purpose of emphasizing the impact of the uncertainty of the boresight parameters on the georeferenced point clouds along the crossing paths. Figure 5 shows the reconstructed point cloud after applying the calibrated boresight parameters obtained with the presented approach. With the boresight values determined by the new approach, the point clouds from the crossing paths are all aligned properly, within an RMS of ±5 cm.