An 892 Kb PDF of this article as it appeared in the magazine, complete with images, is available by clicking **HERE**

In about 1822, the French mathematician Joseph Fourier invented the Fourier transform (as a tool to solve the heat equation). This theory provided, among other things, a method for transforming variables from the spatial domain to the spatial frequency domain (or, as is commonly used in electrical engineering, from the time domain to the frequency domain).

Sometime in the twentieth century, the Central Slice Theorem (or Projection-Slice Theorem) appeared. Its origins are a bit fuzzy; some credit the radio astronomer Ronald Bracewell with its invention in 1956. However, the actual originator is most likely Johann Radon, who presented the (yet to be named) Radon transform in 1917. This fascinating discovery says that a series of measurements of radial density can be transformed (via the aforementioned Fourier transform) into a two-dimensional axial view. This, of course, is the mathematical principle of Computerized Axial Tomography (the CAT scan): your body is shot through radially with a series of X-rays, and a slice view is computationally generated. There was just one major problem: the Fourier transform operations were so time-consuming as to make a practical application unrealistic.
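The theorem is easy to verify numerically: the one-dimensional Fourier transform of a projection (here, simply summing the image along one axis, i.e. the 0° projection) equals the corresponding central slice of the image's two-dimensional Fourier transform. A minimal NumPy sketch (the "density" image is an arbitrary stand-in for a body slice):

```python
import numpy as np

# Arbitrary "density" image standing in for a body slice.
rng = np.random.default_rng(0)
image = rng.random((64, 64))

# 0-degree projection: integrate (sum) the density along the y axis.
projection = image.sum(axis=0)

# Central Slice Theorem: the 1-D FFT of that projection equals the
# ky = 0 row (the "central slice") of the image's 2-D FFT.
slice_1d = np.fft.fft(projection)
central_row = np.fft.fft2(image)[0, :]

print(np.allclose(slice_1d, central_row))  # True
```

Collect projections at enough angles and the slices tile the whole 2-D frequency plane, from which the axial image can be reconstructed by an inverse transform.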

Enter John Tukey of Princeton University/Bell Labs and James Cooley of IBM. In 1965, these two invented (actually, rediscovered a method first used by Gauss in the early 1800s) an algorithm that dramatically reduced the time needed to perform a Fourier transform. This invention, the Cooley-Tukey Fast Fourier Transform (FFT), caused a revolution in imaging.

Imagine that you have an operation that requires 1 second to perform. Suppose you will perform the operation on N data items where N is 1,024. Now further assume that your algorithm requires N² operations (a so-called O(N²) algorithm). Then the total processing time will be 1,024² seconds, or 291.27 hours. Now suppose you came up with an algorithm that accomplished the same thing but with N ln(N) operations (an N-log-N algorithm). Then the total time for the same processing of 1,024 data items becomes 1,024 × ln(1,024) seconds, or 1.97 hours! The point here is not that 1 second per operation is indicative of true processing time but rather the dramatic time decrease between the two implementations. That computational speedup moved CAT scanning from a theoretically interesting concept to a flood of commercial medical scanning devices in under five years.
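The arithmetic is easy to check (one second per operation, times converted to hours):

```python
import math

n = 1024
seconds_quadratic = n ** 2        # O(N^2): 1,048,576 operations
seconds_nlogn = n * math.log(n)   # N ln(N): ~7,098 operations

print(seconds_quadratic / 3600)   # ~291.27 hours
print(seconds_nlogn / 3600)       # ~1.97 hours
```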

So what does this have to do with LIDAR or point clouds? A similar but quieter revolution has been occurring in photogrammetry. Extracting a digital surface model (DSM) from stereo photographs (called disparity estimation in computer vision) has been done for decades. However, these models were typically extracted using local correlation windows to match points in the left image to the right image (or, more properly, the base image to the match image). These techniques, at least at the pixel level, become time-prohibitive when large disparities in depth occur within the images (for example, tall buildings). Global matching techniques had to be employed to extract the model. Unfortunately, the computational cost of the global algorithms was so high as to make the extractions impractical for all but the most esoteric of examples (e.g. using a supercomputer).
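A toy sketch of that local-window approach makes the cost obvious. This is not any vendor's implementation; the window size, disparity range, and sum-of-squared-differences cost are all illustrative choices. Note the run time grows directly with the disparity search range, which is exactly why tall-building scenes hurt:

```python
import numpy as np

def block_match_disparity(base, match, max_disp=16, win=5):
    """Brute-force local correlation: for each base pixel, slide a window
    along the epipolar line of the match image and keep the disparity with
    the lowest sum of squared differences (SSD).
    Cost is O(H * W * max_disp * win^2) -- impractical per-pixel when
    max_disp is large."""
    h, w = base.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    pad_b = np.pad(base.astype(np.float64), half, mode="edge")
    pad_m = np.pad(match.astype(np.float64), half, mode="edge")
    for y in range(h):
        for x in range(w):
            ref = pad_b[y:y + win, x:x + win]
            best_cost, best_d = np.inf, 0
            for d in range(max_disp):
                if x - d < 0:
                    break  # candidate window would fall off the image
                cand = pad_m[y:y + win, x - d:x - d + win]
                cost = np.sum((ref - cand) ** 2)
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Feeding it a synthetic pair where the match image is the base shifted by a known amount recovers that shift as the disparity over the image interior.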

Enter Heiko Hirschmueller of the German Aerospace Center (DLR), Institute for Robotics and Mechatronics. In 2005, Dr. Hirschmueller invented a new approach to pixel-by-pixel DSM generation with run times orders of magnitude faster than was previously possible. His Semi-Global Matching (SGM) algorithm provides a high-speed method of generating digital surface models on a pixel-by-pixel basis. This means, for example, that image pairs with a 5 cm ground sample distance (GSD) can be used to generate a DSM with a point spacing also of 5 cm. Since Dr. Hirschmueller's seminal paper of 2005, dozens of implementations of SGM have arrived on the scene, ranging from commercial implementations by the photogrammetry vendors and a cloud service version from Pix4D (www.pix4d.com) to free, open-source versions (for example, Open Source Computer Vision, OpenCV, includes Dr. Hirschmueller's latest block-optimized version; www.opencv.org).
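The heart of SGM is a dynamic-programming cost aggregation along (typically 8 or 16) one-dimensional paths through the image, which approximates a global smoothness constraint at near-local cost. Below is a minimal sketch of a single path (left to right along a scanline) following the recurrence in the 2005 paper; the penalty values P1 and P2 are illustrative placeholders, not the published defaults:

```python
import numpy as np

def aggregate_left_to_right(cost, P1=10.0, P2=120.0):
    """One of SGM's path aggregations, left-to-right along a scanline.
    cost: (W, D) matching-cost array for one image row (W pixels,
    D disparity hypotheses). P1 penalizes a 1-level disparity change,
    P2 penalizes any larger jump."""
    W, D = cost.shape
    L = np.empty_like(cost, dtype=np.float64)
    L[0] = cost[0]
    for x in range(1, W):
        prev = L[x - 1]
        min_prev = prev.min()
        same = prev                                        # same disparity
        up = np.concatenate(([np.inf], prev[:-1])) + P1    # d - 1
        down = np.concatenate((prev[1:], [np.inf])) + P1   # d + 1
        jump = np.full(D, min_prev + P2)                   # any other d
        # Subtracting min_prev keeps L from growing without bound.
        L[x] = cost[x] + np.minimum.reduce([same, up, down, jump]) - min_prev
    return L
```

In a full implementation the aggregated costs from all paths are summed per pixel and the winning disparity is the per-pixel minimum; running the recurrence on a cost array that favors one disparity keeps that disparity as the minimum along the whole scanline.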

It is important to note that per-pixel DSM generation is not restricted to Dr. Hirschmueller's SGM algorithm. EarthData of Frederick, Maryland (subsequently acquired by Fugro) was one of the first commercial companies to embrace per-pixel DSM generation when it acquired United States rights to the iStar correlation system (the first implementation of "pixel factory"). Thus, long before the commercial aerial camera companies were touting the coming advantages of high-density DSM, EarthData was delivering the product. So while Dr. Hirschmueller's SGM may not be the only algorithm, it is probably the fastest and most robust.

So finally to the issues of relevance to LIDAR. High-density digital surface modeling (HD-DSM) is now being presented by some industry participants as a new, highly disruptive technology. I recently viewed a product presentation from Dr. Alexander Weichert of Microsoft that pitted the UltraCam aerial camera products not against cameras from other manufacturers but against LIDAR systems! This is not surprising given that LIDAR is a much more rapidly growing technology than is imagery.

So will HD-DSM replace LIDAR? Will this magazine have to change to "DSM News"? Not likely. Those of us who were around at the advent of LIDAR recall the motivations for its selection over correlated stereo models. The primary problems remain: there must be texture for point detection, and at least two image rays from different camera orientations must be reflected from the object space point to be modeled. While much research has focused on addressing the texture issue, there is nothing that can be done about the multi-ray requirement. Enter LIDAR.

LIDAR retains the tremendous advantage that only a single laser ray need reflect from an object space point to discern an accurate height measurement. Thus LIDAR was originally adopted in broad-area topographic mapping because image correlation could not effectively deal with vegetation (i.e., in vegetated areas, it is very difficult to consistently have multiple photo rays image the same "bare earth" point). This remains true today. Optical DSM also does not work well in areas of low texture with very high depth disparity (e.g. power line detection).

The vegetation issue is clearly illustrated in Figure 1. This is a per-pixel, colorized HD-DSM model with voids (that is, areas where base-match disparity could not be discerned) colored in cyan. Note the obvious multi-ray issue in vegetated areas and obscured regions but notice also some typical texture problems on the roofs. One could not meet a bare earth density requirement by extracting "ground" from such a model.

Does this mean that we dismiss HD-DSM? Of course not! This is perhaps one of the most exciting new data sources to become available since the advent of LIDAR. It has extremely promising applications in areas such as city modeling and, of course, gorgeous 3D visualization.

An area where SGM is particularly interesting is those situations where it is simply not feasible to fly a LIDAR unit. The most exciting to me is the emergence of micro-Unmanned Aerial Systems (UAS). A UAS (and caution: there may be some argument regarding the exact mass break-over from UAS to small UAS) is an autonomous aircraft with a mass of less than 5 kg. Until flash LIDAR becomes practical, these UASs will be carrying cameras only. With SGM, we can generate very high quality surface models and use these models in a plethora of elevation modeling applications (site planning, volumetrics for quarries and open pit mines, archeology…). Already you can purchase a complete commercial off-the-shelf UAS mapping kit. The Aeryon Scout (see Figure 2) is an example of a quadcopter configuration.

So what is the impact of SGM on the LIDAR industry? Well, in my opinion, SGM is another tool in our collection. Perhaps if the only technology one owns is a camera, then SGM looks to be the solution to every problem. However, the vast majority of data acquisition companies own a stable of various types of sensors. As always, they will apply the technology most suited to the needs of the client.

*Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.*
