A 1.362Mb PDF of this article as it appeared in the magazine complete with images is available by clicking HERE
The last decade has seen LiDAR become one of the most commonly mentioned technologies in the geospatial community and beyond, and it is now used in a wide variety of applications, some of which were not even conceived of 10 years ago. This is part of a transition from 2D to 3D representation of our surroundings, with LiDAR technologies playing a key role.
With advances in instrumentation and technology, better and more precise systems are entering the market. However, the current state of the art has a lacuna: all existing laser scanning methods, applications and technology rely on the basic assumption that the object being scanned is stationary. This assumption holds in a number of applications, and the system performs more or less as expected whenever the motion of the object of interest is small enough to be neglected.
Obviously, the results obtained by neglecting the motion of the object are inferior to those obtained when the object is truly stationary. In cases where the motion of the object cannot be neglected, the conventional methods fail and the collected data appear distorted when processed with existing methods.
The reason existing methods fail to capture the 'true' geometry of a moving object is that a point on the object may be observed multiple times, yielding different coordinates each time because the object is captured at different positions. These different instantaneous coordinates make the object appear distorted. Hence, to recover the true geometry of the object, a correction that takes the object's motion into account must be applied to the collected data.
There are several applications where the object of interest cannot be made stationary but still needs to be observed in order to measure its as-is 3D geometry. A few examples are ships and vessels, aircraft carriers, aerostats, and hot air balloons. There is currently no method capable of providing as-built measurements of these objects because of their continuous movement. In the absence of the correct as-built geometry, the design (CAD) models of such objects are used to carry out studies and simulations, e.g., an aerodynamic study of an aerostat. These models do not represent the true geometry of the actual objects, so the results of such studies do not present the correct picture, which might have serious repercussions.
Currently, there are no methods available to determine whether these objects have been manufactured in accordance with their design models, or to assess the damage incurred in an accident, e.g., estimating the damage to a ship that has collided with another ship or object. The method proposed in this article is capable of tackling this problem and thus opens up a new era of laser scanner applications.
The proposed method of motion correction utilizes a well-known principle of physics called relative motion, which says that the observed motion of an object depends on the frame of reference of the observer. In simpler terms, a moving object might appear to be in motion to an observer A while the same object appears static to an observer B. If a moving object is observed from a frame of reference that moves with the object and follows the same pattern of motion, then the 'moving' object appears 'stationary' in that frame of reference.
Following the same concept, we define a new coordinate system, the Object Body Coordinate System (OBCS), which has the unique property that it is rigidly attached to the moving object. Because of this property, any point lying on the object appears static when observed in the OBCS. Hence, the problem of motion correction reduces to transforming the 3D data of the moving object into the OBCS. To perform this transformation, the following information is required:
Trajectory of the object in a given reference system.
3D data of the moving object recorded by the laser scanner and converted to the same reference system as above.
By merging the above information using a motion correction algorithm, it is possible to restore the true geometry of the object. To collect this information, a GPS/IMU system is installed on the moving object to provide the object trajectory in a global coordinate system, and the laser scanner is oriented so that it provides the coordinates of points on the moving object in the same global coordinate system.
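The core of such a correction can be sketched as follows. This is a minimal illustration, not the authors' patented implementation: the function names, the yaw-only orientation model, and the timestamped-trajectory lookup are our own simplifying assumptions. The idea is simply that each georeferenced laser point, tagged with its acquisition time, is shifted by the object's position at that time and rotated by the inverse of the object's attitude, which maps it into the OBCS where the object appears stationary.

```python
import numpy as np

def yaw_rotation(yaw_deg):
    """Rotation matrix from the object body frame to the global frame.
    For brevity this sketch uses yaw only; a full implementation would
    build R from the roll, pitch and yaw reported by the IMU."""
    c, s = np.cos(np.radians(yaw_deg)), np.sin(np.radians(yaw_deg))
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def to_obcs(points_global, trajectory):
    """Transform georeferenced laser points into the Object Body
    Coordinate System (OBCS).

    points_global : iterable of [t, x, y, z] laser returns in the
                    global coordinate system.
    trajectory    : dict mapping time t -> (position (3,), yaw_deg),
                    i.e. the GPS/IMU pose of the object at time t.
                    (A real system would interpolate between epochs.)
    """
    corrected = []
    for t, x, y, z in points_global:
        position, yaw = trajectory[t]
        R = yaw_rotation(yaw)
        # Undo the object's motion: subtract its position, then apply
        # the inverse (transpose) of its attitude rotation.
        corrected.append(R.T @ (np.array([x, y, z]) - position))
    return np.array(corrected)
```

A point rigidly attached to the object, observed at two different poses, maps to the same OBCS coordinates, which is exactly what makes the merged scans coincide.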
Experimental Testing and Validation
A series of experiments was conducted to validate the motion correction method. A 'stop-and-go' approach was adopted to scan an object and observe its trajectory. In this approach, the object is kept static and scanned, then moved to a different location and orientation and scanned again. This process was repeated to obtain five different scans of the static object. The proposed motion correction method should be able to merge all of these scans into one.
There were two reasons for using a 'stop-and-go' method instead of a purely kinematic one. The first was that the synchronization hardware needed to integrate all of the data, although commonly available, was not accessible to the authors. The second was to enable analysis of the captured data to determine the validity of the motion correction method: since each single scan represents the complete geometry of the object, features in the combined scan are easy to compare.
Setup and processing: An Indian rickshaw with a cuboid-shaped almirah (cabinet) installed at its back was chosen as the object. This setup, shown in Figure 1, carries a GPS receiver, an IMU, and a laptop on its top. A control network was established in the field to orient the laser scanner so that it outputs the coordinates of the object in the global coordinate system. The process was repeated using the stop-and-go approach for each of the five positions of the object. The captured data were processed to obtain the georeferenced point cloud; up to this step, conventional processing methods were used, i.e. the object was assumed to be stationary.
The output is a point cloud for each of the five scans in the global coordinate system. The GPS/IMU data for the moving object are processed to compute its position and orientation for each scan. When visualized in standard point cloud software, the processed laser data show five different objects in space, located at different positions and orientations, as shown in Figure 2.
This output is then processed together with the computed trajectory of the object using the motion correction algorithm, which merges all five scans into a single combined point cloud of the object. A snapshot of the point cloud obtained after processing is shown in Figure 3, where the different colors correspond to the individual static scans. The cuboid object matches quite well, while the hanging wires and the front wheel of the cart appear at multiple locations in the combined data. This is because the wires and front wheel are not rigid with respect to the GPS/IMU; only the cuboid object, rear wheels, etc. are rigid with respect to the GPS/IMU, and hence only these features match in the combined scan.
To determine the accuracy of the proposed method, two perpendicular planes were extracted from the transformed datasets and the orientations of corresponding planes were compared. The maximum deviation of a plane from the corresponding plane of a reference scan was about 2 degrees, which is quite acceptable given that a relatively low-grade IMU (roll and pitch accuracy: 0.25 deg; yaw accuracy: 0.5 deg) was used in the experiment. Better results can be expected with a better IMU. These results validate the proposed motion correction method and show that the algorithm works in real-life scenarios.
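The plane-based check described above can be reproduced with a few lines of standard linear algebra. The sketch below is our own illustration of the general technique (the article does not specify the plane-fitting method used): each plane is fitted to its point cloud by SVD, whose smallest singular vector is the plane normal, and the deviation is the angle between the two normals.

```python
import numpy as np

def fit_plane_normal(points):
    """Fit a plane to an (N, 3) point cloud via SVD and return its unit
    normal (the right singular vector of the smallest singular value)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def angle_between_planes(points_a, points_b):
    """Angle in degrees between the best-fit planes of two point sets."""
    na, nb = fit_plane_normal(points_a), fit_plane_normal(points_b)
    # abs(): the sign of a fitted normal is arbitrary.
    cosang = abs(np.dot(na, nb))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
```

Applied to the same planar patch extracted from a reference scan and from a motion-corrected scan, this yields the angular deviation the authors report (about 2 degrees at most in their experiment).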
The major application of this technology lies in scanning objects that cannot be held stationary during the scanning process, such as ships, boats, aircraft carriers, aerostats, and hot air balloons. Although technologies like flash LiDAR, photogrammetry, and structured-light systems can also observe moving objects at short range, no solution exists for long-range scanning, where the proposed method is unique. The available methods mentioned above would also fail for objects with large physical dimensions or objects lacking distinctive features, i.e. homogeneous objects such as an aerostat or hot air balloon. The proposed method is unaffected by the dimensions of the object or the unavailability of distinctive features on it. The proposed setup and method are currently at the prototype stage, and the authors have filed a patent on this technology. Interested professionals are welcome to collaborate and further develop this prototype into a full-fledged commercial product.
Note: The authors thank LASTEC/DRDO for funding this research project.
Dr. Bharat Lohani is Associate Professor at IIT Kanpur, India, with research interests in laser scanning technology development and applications. He is also Founder Director of the IIT Kanpur spin-off Geokno India Pvt. Ltd., an emerging laser scanning company in India.
Mr. Salil Goel earned his Bachelor’s degree in Civil Engineering and Master’s degree in Geoinformatics from IIT Kanpur in 2011.
Mr. Ravi Sharma is a master’s student at IIT Kanpur in Geoinformatics.