Light Detection and Ranging (LiDAR) systems, in various forms, have become an essential part of autonomous navigation for automobiles and unmanned ground vehicles (UGVs). Many LiDAR systems have been tested on robotic platforms and UGVs, demonstrating mostly obstacle avoidance capabilities. Beyond obstacle avoidance, 3D perception is essential to many mobile robotic systems, as robots are increasingly required to operate in harsh environments and to interact safely and effectively with humans, other vehicles, and their surroundings. Furthermore, in many applications the identification and classification of objects, not just their location, is important for situational planning.
One of the major challenges for current LiDAR systems is sensing the environment and using that perceptual information for control. For example, on a bumpy dirt road the robot should constantly determine which bumps and holes are small enough to be negotiated (possibly by slowing down) and which should be avoided. Vegetated terrain introduces one more degree of freedom to the problem: what is considered an "obstacle" from a purely geometric point of view may not represent a danger to the vehicle if it is composed of compressible vegetation (for example, a tuft of tall grass or a small bush).
Other challenging situations include the presence of negative obstacles (such as ditches), elements such as water, mud, or snow, and adverse atmospheric conditions such as fog. Most current low-cost LiDAR technologies are single-line scanners with limited capability to provide 3D imaging data. On the other hand, flash LiDAR systems based on two-dimensional and multi-element sensor arrays do provide 3D imaging data, but they are cost prohibitive to deploy on robotic and UGV platforms. Hence, there is a need for a low-cost 3D imaging LiDAR system that can operate both indoors and outdoors with high reliability in harsh environments.
Scanning LiDAR System Description
Spectrolab has developed a low-cost, compact Micro-Electro-Mechanical Systems (MEMS) scanning LiDAR imager based on technology originally developed at the Army Research Laboratory for military robotic applications. It was designed to address these hurdles and to meet the military's robotic requirements for a low-cost, compact, short-range LiDAR with a wide field of view [1]. Figure 1 shows the block diagram of the LiDAR system.
This system operates on a tip/tilt mirror scanned time-of-flight (TOF) principle: the LiDAR transmits a short optical pulse to determine the range to a target and uses a two-axis MEMS mirror to establish the angular direction. Referring to Figure 1, a pulse trigger fires a fiber laser operating at a 1550 nm wavelength, which emits a short 2-3 ns pulse of light at a high repetition rate; the pulse is collimated and then directed onto the surface of a small MEMS scan mirror. Operating at 1550 nm has several advantages: higher laser power can be used to achieve longer range, the beam is less susceptible to rain and fog, and, most importantly for many military applications, it is a covert wavelength.
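To make the TOF principle above concrete, the following minimal Python sketch converts a measured round-trip time into a one-way range. The 2-3 ns pulse and the ~20-25 m maximum range are from the article; the helper name and the example delay value are illustrative assumptions only.

```python
# Minimal sketch of the time-of-flight range calculation described above.
C = 299_792_458.0  # speed of light in m/s

def tof_to_range(round_trip_seconds: float) -> float:
    """Convert a measured round-trip time to a one-way range in meters."""
    return C * round_trip_seconds / 2.0

if __name__ == "__main__":
    # A return detected ~133 ns after the pulse fires corresponds to ~20 m,
    # roughly the maximum range quoted for this camera.
    print(f"{tof_to_range(133e-9):.2f} m")
```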
Electrostatic actuators driven by a high voltage tilt the mirror to give a full scan angle of 60° x 30°. Light backscattered from the target is collected and the time of flight is measured. This process is repeated sequentially for each of the 256 (horizontal) x 128 (vertical) pixels per frame. The image frame data (amplitude and range for each pixel) are transmitted via Ethernet for display at a 5 Hz frame rate. The maximum range of this model of LiDAR camera is 20 to 25 meters, depending on target reflectivity; the minimum range is less than 6 inches.
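The figures above imply a particular angular sampling and pulse rate. The back-of-the-envelope sketch below derives them from the quoted frame size, frame rate, and scan angles; the derived numbers are illustrative, not manufacturer specifications.

```python
# Scan geometry and data rate implied by 256 x 128 pixels per frame at 5 Hz.
H_PIXELS, V_PIXELS = 256, 128   # horizontal x vertical samples per frame
FRAME_RATE_HZ = 5               # frames per second
FOV_H_DEG, FOV_V_DEG = 60, 30   # full scan angles of the MEMS mirror

pixels_per_frame = H_PIXELS * V_PIXELS                 # 32,768 range/amplitude samples
pulses_per_second = pixels_per_frame * FRAME_RATE_HZ   # ~164k laser pulses per second

print(f"Angular step: {FOV_H_DEG / H_PIXELS:.3f} deg (H) x {FOV_V_DEG / V_PIXELS:.3f} deg (V)")
print(f"Pulse rate needed: {pulses_per_second:,} pulses/s")
```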
The prototype scanning LiDAR camera (SpectroScan3D) is shown on the right side of Figure 1. The design is portable, and the unit weighs about 5 lbs. The system achieves its low cost by using many COTS components and a simple InGaAs PIN receiver; the only expensive part is the fiber laser.
Figure 2 shows a sample video frame from the LiDAR camera. On the left is a range- and intensity-coded false-color image; beside it is a black-and-white amplitude-only image produced from the same data. Both are rendered in an OpenGL-based software platform that produces live streaming point cloud video that can be manipulated in three-dimensional space with the mouse. As can be seen, the resolution is good enough that targets may be identified with software algorithms. Thanks to the system's high resolution, thin strings and cords can be seen and ranged from several meters away.
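Each displayed pixel carries a range measured along a known scan direction, so a frame can be turned into an (x, y, z) point cloud for this kind of 3D display. The sketch below shows one way to do that conversion; the uniform-angle assumption, function name, and axis convention are mine, not details taken from the article.

```python
# Hedged sketch: convert a 256 x 128 range image from an angular scanner into
# an (x, y, z) point cloud, assuming uniform angular sampling across the FOV.
import numpy as np

def range_image_to_points(rng: np.ndarray,
                          fov_h_deg: float = 60.0,
                          fov_v_deg: float = 30.0) -> np.ndarray:
    """rng: (V, H) array of ranges in meters; returns a (V*H, 3) array of xyz points."""
    v, h = rng.shape
    az = np.deg2rad(np.linspace(-fov_h_deg / 2, fov_h_deg / 2, h))   # azimuth per column
    el = np.deg2rad(np.linspace(-fov_v_deg / 2, fov_v_deg / 2, v))   # elevation per row
    az, el = np.meshgrid(az, el)
    x = rng * np.cos(el) * np.cos(az)   # forward
    y = rng * np.cos(el) * np.sin(az)   # left/right
    z = rng * np.sin(el)                # up/down
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a flat wall 5 m away produces a planar point cloud.
points = range_image_to_points(np.full((128, 256), 5.0))
```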
Scanned LiDAR Field Demonstrations
The Army has demonstrated brassboard and breadboard units on iRobot Packbot robotic platforms [2]. The prototype unit shown above, designed for commercial use, has been further demonstrated in several different environments. Figure 3 shows the LiDAR image beside a normal color camera image of an outdoor scene. As can be seen from the LiDAR image, the system performs well outdoors, and small objects are easily resolved in the scene.
The LiDAR camera was also demonstrated on a robotic vehicle platform, integrated into behavioral software developed by 5D Robotics. Several behaviors were demonstrated, including human tracking/following and obstacle avoidance. The scanning LiDAR camera enhances the robotic platform's capabilities over standard line-scan rangers in that vertical height may be included in the obstacle identification algorithms. Figure 4 shows the robotic platform in follower mode, with the line-scan LiDAR systems supplemented by the SpectroScan3D data shown in red.
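To illustrate how vertical height can feed obstacle identification, the sketch below flags points that rise above an assumed clearance height. This is an illustrative example under simple assumptions (a flat ground plane at a known height), not 5D Robotics' software or the algorithm used on the demonstrated platform.

```python
# Illustrative height-threshold obstacle test that a single-line scanner cannot do:
# points below the vehicle's clearance height are treated as traversable ground.
import numpy as np

def obstacle_mask(points_xyz: np.ndarray,
                  ground_z: float = 0.0,
                  clearance_m: float = 0.15) -> np.ndarray:
    """Return a boolean mask of points tall enough to matter to the vehicle."""
    heights = points_xyz[:, 2] - ground_z
    return heights > clearance_m

# Example: one low ground return and one 40 cm tall box return.
pts = np.array([[2.0, 0.0, 0.05], [2.0, 0.5, 0.40]])
print(obstacle_mask(pts))  # [False  True]
```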
Mapping Applications
Although originally designed for military robots, this system also shows promise in a number of other application areas, such as automotive vehicles, architectural mapping, mine exploration, machine vision, and manufacturing. Several of these opportunities are an ideal fit for scanned LiDAR imagers. Mine mapping robots are used to map caverns deep beneath the earth's surface; they require LiDAR systems with a wide field of view to cover the cave walls as they move down the center of the shaft.
Image stitching can be done using either onboard or built-in GPS. Indoor or deep underground mapping without GPS signal can use the LiDAR data with SLAM (Simultaneous Localization and Mapping) algorithms to provide map registration. Architectural mapping is also a potentially good fit for a scanned LiDAR technology. Traditional tripod mounted units have the disadvantage in that they take time to map a scene while scanning LiDAR imagers can map architectural features quickly and actively while mounted to a moving platform for larger coverage area. Another active area of interest for scanning LiDARs is in various docking operations for NASA’s space station and other space based vehicle navigation applications.
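The core of map registration, whether the pose comes from GPS or from a SLAM estimate, is transforming each scan from the sensor frame into a common map frame before merging. The minimal 2D sketch below shows that step; the pose representation and function names are illustrative assumptions, not part of any specific SLAM package.

```python
# Minimal sketch of scan registration: rotate and translate each scan by the
# vehicle pose (x, y, yaw) so successive scans line up in one map frame.
import numpy as np

def register_scan(points_xy: np.ndarray, x: float, y: float, yaw_rad: float) -> np.ndarray:
    """Transform scan points from the sensor frame into the map frame."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    rot = np.array([[c, -s], [s, c]])
    return points_xy @ rot.T + np.array([x, y])

# Example: a wall 5 m ahead at the origin appears 4 m ahead after driving 1 m
# forward; registering the second scan with the second pose puts both copies
# of the wall in the same place on the map.
scan_a = np.array([[5.0, -1.0], [5.0, 1.0]])
scan_b = np.array([[4.0, -1.0], [4.0, 1.0]])
map_points = np.vstack([register_scan(scan_a, 0.0, 0.0, 0.0),
                        register_scan(scan_b, 1.0, 0.0, 0.0)])
```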
Future Work and Product Improvements
Near-term and easily achievable performance improvements, such as longer range (>50 meters), multiple-pulse detection (up to 3 pulses), and a higher frame rate (>10 Hz), will enable this LiDAR system to meet the requirements of many other applications. Environmental testing is a near-term priority; the survivability of the MEMS scanning mirror under heavy shock and vibration loads is of interest to military and automotive users. While the current version of the SpectroScan3D detects a single return pulse, several existing LiDAR systems can detect multiple returns of the same laser pulse, which yields better-quality point clouds along with the ability to determine object transparency.
The LiDAR market is extremely cost sensitive. The current system costs about $25k in small quantities (<10), and we expect the price to fall as volume increases. The application areas for these low-cost scanned LiDAR imagers are nearly limitless, and the exploration of these areas has just begun.
Rengarajan Sudharsanan is the director of sensor products at Spectrolab Inc. and Robert Moss is an engineer at Spectrolab Inc.
References
[1] R. Moss, P. Yuan, X. Bai, E. Quesada, and R. Sudharsanan, "Low-cost compact MEMS scanning LADAR system for robotic applications," in Proc. SPIE Laser Radar Technology and Applications XVII, 2012.
[2] B. Stann, J. Dammann, J. Enke, P.-S. Jian, M. Giza, W. Lawler, and M. Powers, "Brassboard development of a MEMS-scanned ladar sensor for small ground robots," in Proc. SPIE Laser Radar Technology and Applications XVI, 2011.