Readers of this magazine already understand the utility of lidar for producing digital terrain models and for autonomous navigation. The same capabilities that make lidar ideal for autonomy and surveying, such as physical range measurements, object detection and hazard avoidance, and 3D mapping, could also serve navigation and mapping on the Moon. But the advancements that have supercharged the uses of lidar on Earth may run up against insurmountable hurdles there. For off-Earth utility, how can we effectively bridge the differing technical and engineering requirements for autonomous navigation and those for ultra-high-resolution terrain mapping? At odds are the “see-and-scrap” autonomous solution (see the objects, then scrap the data) and “scan-and-save” (save all the points to make maps).
A lidar-based vision and navigation system for a planetary rover is almost a no-brainer. The illumination conditions at the lunar South Pole, where the Sun never rises more than ~3° above the horizon, are nothing but long shadows and direct sun. Stereo photogrammetry will struggle, or will require onboard lighting, negating most of the size, weight, and power advantages of camera-based vision sensing. Lidar’s active-source imaging capability is likely needed.
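To see why that lighting geometry is so punishing for passive cameras, consider how long the shadows get. A quick sketch (the 2 m boulder and flat-ground assumption are illustrative, not figures from this article):

```python
# Shadow length on flat ground for a given sun elevation.
# At the lunar South Pole the Sun sits at ~3 deg or lower, so even
# small terrain features cast shadows tens of meters long.
import math

def shadow_length_m(feature_height_m, sun_elevation_deg):
    """Horizontal shadow length cast by a feature on flat ground."""
    return feature_height_m / math.tan(math.radians(sun_elevation_deg))

# Assumed example: a 2 m boulder under a 3 deg sun.
print(round(shadow_length_m(2.0, 3.0), 1))  # ~38.2 m of shadow
```

A single boulder hiding a ~38 m swath of terrain from a camera illustrates why an active-source sensor is so attractive.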
The same lidar system that provides hazard avoidance for the rover can also provide ultra-high-resolution point clouds of the terrain. The value of ultra-high-resolution terrain mapping of lunar landing and exploration sites is incalculable. Nearly every operation on the surface requires topographic context, from deciding where to explore next to documenting sample locations. Adding construction and habitation facilities to a lunar outpost requires the accuracy and repeat-mapping capability of lidar. Arguably the most important element is the curation and public-engagement opportunity that lidar context mapping provides. Knowing, more than 50 years from now, exactly what was done on the surface, and being able to take the public there virtually, will be magnificent.
To test operational methods (how to effectively scan and map), we can use Earth-based ground lidar systems such as terrestrial laser scanning (TLS) and mobile mapping solutions. These are helping us understand and optimize path planning and field of view, and weigh the advantages and disadvantages of mobile mapping versus TLS. But the real challenges that may prevent lidar from reaching its full potential are engineering-based: not the sensors themselves (thermal, vacuum, and radiation issues can be solved), but the data volumes produced and the transfer of those enormous volumes. Space-hardened computers cannot currently provide the onboard computing needed for sophisticated 3D lidar simultaneous localization and mapping (SLAM). And while optical communications to the Moon may soon be possible, even that expanded bandwidth may not be sufficient for beaming back gigabytes of data. Lidar can certainly be used for navigation (see-and-scrap), but can we actually navigate with lidar and save all the data?
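The scale of the data-transfer problem is easy to sketch. The numbers below are illustrative assumptions, not mission specifications: a rover-class scanner rate, a modest per-point size, and a sustained radio-frequency downlink.

```python
# Back-of-envelope comparison of raw lidar capture volume vs. downlink time.
# All figures are assumed for illustration, not from any mission design.

POINTS_PER_SECOND = 300_000   # assumed scan rate for a rover-class lidar
BYTES_PER_POINT = 16          # assumed: x, y, z floats plus intensity/attributes
TRAVERSE_HOURS = 6            # assumed daily mapping traverse

raw_bytes = POINTS_PER_SECOND * BYTES_PER_POINT * TRAVERSE_HOURS * 3600
raw_gb = raw_bytes / 1e9

DOWNLINK_MBPS = 10            # assumed sustained downlink, megabits per second
downlink_hours = raw_bytes * 8 / (DOWNLINK_MBPS * 1e6) / 3600

print(f"raw capture: {raw_gb:.1f} GB")
print(f"downlink time at {DOWNLINK_MBPS} Mb/s: {downlink_hours:.1f} h")
```

Under these assumptions, a single six-hour traverse produces on the order of 100 GB, which would take nearly a full day to downlink, longer than it took to collect. That is the gap that onboard reduction and better links must close.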
Frankly, the path forward for the adoption of lidar on the Moon consists of clever solutions that work within the existing constraints of planetary-rover avionics packages. Developing space-bound computing hardware capable of handling lidar data processing, volumes, and transfer is likely too costly. Creative solutions from the lidar industry are needed to help bring this technology to the Moon. Moreover, the challenges of see-and-scrap and scan-and-save are not unique to the Moon. Improvements in data compression and handling, rapid reduction of duplicate points, and acceleration of real-time SLAM-to-DEM conversion would all benefit the future capture of lidar data and mapping of Earth environments as well.
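One of those reductions, thinning duplicate points, is conceptually simple. A minimal sketch of voxel-grid thinning, keeping one point per voxel (pure Python for clarity; a flight implementation would be far leaner and likely run in fixed-point on dedicated hardware):

```python
# Voxel-grid thinning: overlapping scans of the same terrain produce
# near-duplicate returns; keeping one point per voxel discards them
# before storage or downlink. Illustrative sketch, not flight code.

def voxel_thin(points, voxel_size):
    """Keep the first point seen in each (ix, iy, iz) voxel."""
    kept = {}
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        if key not in kept:
            kept[key] = (x, y, z)
    return list(kept.values())

# Example: two returns inside the same 5 cm voxel collapse to one point.
cloud = [(0.01, 0.02, 0.00), (0.03, 0.01, 0.02),  # same voxel
         (1.00, 0.00, 0.00)]                      # distinct voxel
thinned = voxel_thin(cloud, voxel_size=0.05)
print(len(thinned))  # 2
```

The voxel size sets the trade: coarser voxels mean smaller files but a lower-resolution map, exactly the see-and-scrap versus scan-and-save tension in miniature.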