Random Points: Data Modeling


LiDAR data are used for a wide variety of tasks, ranging from simple dynamic visualization to full three-dimensional modeling. I am often asked questions regarding the point accuracy and point density needed to support a particular application, so I thought I would devote this month's column to data modeling from point clouds.

Figure 1 depicts a view of a project being processed by a water management district. In addition to the classified ground (shown in orange), this client needs to extract approximate footprints of buildings for use in flood modeling. This figure shows these footprints (actually roofprints) created using automatic extraction in the application LP360 for ArcGIS.

The density of this particular LiDAR data set is only 0.154 points per square foot (a Ground Sample Distance, or GSD, of 2.54 feet), which raises a few questions. How can we model precise edges with data having this coarse a GSD? What is the accuracy of these extracted models?

My own background is electrical engineering (and physics), so my major exposure to modeling is digital signal processing (DSP). Digital signal processing is usually concerned with modeling (or modifying) a signal based on sampling of the data. The best example I can think of is sampling music for digital encoding on a Compact Disc (CD). There is an amazing result in DSP called the "fundamental theorem of sampling" (or the Nyquist-Shannon sampling theorem) that states the following: if the maximum frequency of a signal is F, then the signal can be completely reconstructed from samples so long as the signal is sampled at a frequency of at least 2F. This two-times law is usually referred to as the Nyquist sampling criterion or simply the Nyquist rate. This is why music is sampled at 44.1 kHz (44,100 cycles per second). The input music stream is "filtered" such that all frequencies above 20 kHz (a bit above the upper limit of the most acute human hearing) are removed, and then this data stream is sampled at a bit over twice that maximum frequency. The amazing thing is that the sampling theorem does not say the signal can be approximately reproduced; it says it can be exactly reproduced.
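To make this concrete, here is a minimal numerical sketch (my own illustration, with made-up numbers, not part of any production workflow) in Python with NumPy: a band-limited test signal is sampled a bit above twice its highest frequency and then rebuilt from nothing but the samples using Whittaker-Shannon (sinc) interpolation.

```python
import numpy as np

# Band-limited test "signal": two tones, both below f_max = 20 Hz
f_max = 20.0                 # highest frequency present (Hz)
fs = 2.5 * f_max             # sample a bit above the Nyquist rate of 2 * f_max

def signal(t):
    return np.sin(2 * np.pi * 5.0 * t) + 0.5 * np.cos(2 * np.pi * 18.0 * t)

# Take a finite run of discrete samples
n = np.arange(256)
t_samples = n / fs
x_samples = signal(t_samples)

# Whittaker-Shannon (sinc) interpolation: rebuild the continuous signal
# from nothing but the samples
def reconstruct(t, t_samples, x_samples, fs):
    return np.sum(x_samples * np.sinc(fs * (t[:, None] - t_samples[None, :])), axis=1)

t_fine = np.linspace(1.0, 4.0, 2000)   # stay away from the window edges
error = np.max(np.abs(signal(t_fine) - reconstruct(t_fine, t_samples, x_samples, fs)))
print("max reconstruction error:", error)
# The error is tiny, limited only by truncating the (in principle infinite)
# interpolation sum to 256 samples; in theory the recovery is exact.
```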

So what does this have to do with LiDAR and data modeling? Well, the first interesting class of modeling is the case where nothing at all is known about the object being modeled. This is often the case with bare earth extraction. Here we wish to measure the ground using laser pulses and create from these pulses an "accurate" digital model of the ground with all surface objects (e.g. buildings, vegetation) removed. The Nyquist criterion gives us a recipe for this: filter the data to the desired (in this case, spatial) frequency and then sample at twice this frequency.

Thus if I want a ground model that is accurate to 1 meter in X, Y, then I have to filter the ground to a 1 meter response and then sample at twice this rate. The sampling part is easy – I just need a LiDAR collect with a GSD of 0.5 meters or finer. The filtering is the rub! This is accomplished in electronic sampling systems by using circuits (called filters, of course) that block frequencies above the sampling limit. In imaging, this can be accomplished by effectively averaging (or "smearing") the spatial response of the system. This occurs naturally in a LiDAR system through beam divergence.

The general idea is to set up the collect such that the footprints of the beam are just overlapping at the desired sampling rate. In our 0.5 m collect, this will occur with a ground spot size of 0.5 meters. This is an often overlooked parameter that must be considered when executing a LiDAR mission. In fact, you may notice that some LiDAR systems support user-adjustable beam divergence. It is critically important to realize that if the pre-filtering step is not enforced, the sampling theorem is violated, even if the sample spacing is sufficiently small. Technically, a phenomenon called "aliasing" occurs when pre-filtering is not properly applied: high frequencies are folded over into the low frequencies, producing an erroneous result.
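The fold-over is easy to demonstrate in a few lines (again, my own sketch with invented numbers): a tone above the Nyquist limit is sampled without any pre-filtering, and the samples become indistinguishable from a lower-frequency tone.

```python
import numpy as np

fs = 100.0                 # sampling frequency (Hz); the Nyquist limit is 50 Hz
f_in = 70.0                # a tone ABOVE the Nyquist limit, left unfiltered

n = np.arange(1000)
x = np.sin(2 * np.pi * f_in * n / fs)      # sample it anyway (no pre-filter)

# Where does the energy land in the sampled spectrum?
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
print("apparent frequency:", freqs[np.argmax(spectrum)], "Hz")
# Prints 30.0 Hz: the 70 Hz tone has folded over to fs - 70 = 30 Hz, and
# nothing in the samples reveals that it was ever anything else.
```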

If you think about the sampling theorem, an immediate issue comes to mind. How can we ever model "sharp" edges? A step change in a value contains essentially infinite spatial frequency and thus cannot be accurately modeled, regardless of how fine we make our sampling distance. This is illustrated in Figure 2. The top graph depicts a profile view of slowly undulating terrain. This sort of object can be easily reconstructed using digital sampling techniques. However, the lower graph depicts a step change in elevation. This sort of change is very common in elevation modeling. It occurs in bare earth features such as levees and in discontinuities in the terrain such as building footprints. What are we to do? The Nyquist criterion simply cannot be met for these step features, and thus they cannot be modeled using a DSP approach.
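A short sketch shows why. If we band-limit a step (the pre-filtering the sampling theorem demands), the edge is necessarily smeared and rings on either side, no matter how finely we then sample it. The numbers below are purely illustrative.

```python
import numpy as np

# A 1 m step in elevation along a profile, evaluated very finely for reference
x = np.linspace(-10.0, 10.0, 4001)           # metres
step = np.where(x < 0.0, 0.0, 1.0)           # e.g. the face of a levee

# Band-limit the profile by removing all spatial frequencies above 1 cycle/metre
dx = x[1] - x[0]
spectrum = np.fft.rfft(step)
freqs = np.fft.rfftfreq(len(step), d=dx)     # cycles per metre
spectrum[freqs > 1.0] = 0.0
band_limited = np.fft.irfft(spectrum, n=len(step))

# The edge is no longer a step: the 10%-90% transition is spread over
# roughly 0.4 m, and the profile rings on both sides (the Gibbs effect)
centre = np.abs(x) < 5.0                     # ignore the wrap-around at the ends
transition = x[centre & (band_limited > 0.1) & (band_limited < 0.9)]
print("edge smeared over about", round(transition.max() - transition.min(), 2), "m")
```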

If we know nothing at all about the scene that we wish to model, we are simply out of luck. However, this is usually not the case; typically we have some a priori knowledge of the scene content. With a priori knowledge, we can use statistical techniques, which fall into a number of categories. We will examine two popular methods – template matching and model fitting.

Template matching is used when you know that a specific object may exist in your data set. The template of the object is scanned through the data in a quest for a "good" fit. Usually the template is parameterized since orientation, scale and other parameters that affect the fit vary from scene to scene. In more advanced template matching, we may simply have a mathematical function we wish to fit. A good example of this is a transmission line (see Figure 3).

A line is a very high spatial frequency element in the transverse direction and thus we know this cannot be extracted using DSP techniques. Instead, we might attempt to statistically fit the mathematical model of a transmission line to the data. A hanging cable obeys a catenary curve, defined by a parameterized hyperbolic cosine. Thus I can move through the data, attempting to fit a hyperbolic cosine with adjustable parameters (see Jwa and Sohn in the December 2012 issue of Photogrammetric Engineering & Remote Sensing if you are interested in a detailed discussion of this).
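As a toy illustration of this idea (not the Jwa and Sohn algorithm, and with invented numbers), the sketch below fits a parameterized hyperbolic cosine to simulated wire returns using SciPy's non-linear least-squares routine.

```python
import numpy as np
from scipy.optimize import curve_fit

# Catenary: z = c + a * cosh((x - x0) / a)
# a controls the sag, x0 is the low point, c is a vertical offset
def catenary(x, a, x0, c):
    return c + a * np.cosh((x - x0) / a)

# Simulated wire returns: a true catenary plus about 5 cm of range noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 120.0, 80)                        # metres along the span
z_obs = catenary(x, a=300.0, x0=60.0, c=-270.0) + rng.normal(0.0, 0.05, x.size)

# Least-squares fit of the parameterized hyperbolic cosine to the points
p0 = (200.0, x.mean(), z_obs.min() - 200.0)            # rough initial guess
params, cov = curve_fit(catenary, x, z_obs, p0=p0)
print("fitted a, x0, c:", np.round(params, 2))
print("parameter standard errors:", np.round(np.sqrt(np.diag(cov)), 3))
```

The covariance returned by the fit is exactly the kind of statistical accuracy statement discussed at the end of this column: the point noise rolls directly into the uncertainty of the extracted model parameters.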

Template fitting examples frequently occur in as-built modeling using close-range photogrammetry or static laser scanning of constrained environments such as process plants. Here we have a limited collection of templates such as pipes, valves, elbows, flanges and so forth. An actual manufacturer's catalog can be used to derive precise templates. One then moves through the data, attempting to statistically fit this finite set of templates to the data. Sophistication can be added by implementing contextual (or semantic) modeling: valves must be connected to pipes, pipes don't normally dangle open-ended, and so forth.

Template modeling does not work well when the objects of interest do not match a catalog of defined objects. Such is the case for building extraction. The problem here is that, except in unusual cases, there are no "standard" templates for the object we are attempting to extract. In this sort of feature extraction, we typically resort to statistically fitting geometric primitives and then attempting to construct the object of interest from the primitives.

Let’s consider the case of extracting "boxy" buildings (a different approach would be needed for curvy buildings such as domed structures). We might first extract the ground since this will serve as a reference surface. We then might search the point cloud for "planar surfaces between a lower and upper height, relative to the ground."

In LP360, we probe the point cloud with a rotating axis, examining the spread of data along each axis. A plane will have only noise spread in the direction of its normal (those of you involved in feature extraction will recognize this as a variant of the ubiquitous Principal Component Analysis approach). Once we have planar surfaces, we filter out the ones that do not meet construction criteria such as minimum size, permissible slope relative to horizontal and so forth.
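For those who want to experiment, here is a minimal version of that planarity test written as a straightforward PCA. This is a generic sketch of the idea, not LP360's actual implementation, and the tolerance values are invented: the eigenvector of the neighbourhood covariance with the smallest eigenvalue is the candidate normal, and a plane shows little more than noise along it.

```python
import numpy as np

def planarity(points, noise_tolerance=0.05):
    """Test whether a neighbourhood of points (an N x 3 array) is planar.

    The spread along the candidate normal must be within the expected
    noise level while the other two axes carry the real surface extent.
    """
    centred = points - points.mean(axis=0)
    cov = np.cov(centred.T)                    # 3 x 3 covariance of the neighbourhood
    eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
    normal = eigvecs[:, 0]                     # direction of least spread
    is_planar = np.sqrt(eigvals[0]) < noise_tolerance and eigvals[1] > 10.0 * eigvals[0]
    return is_planar, normal

# A tilted synthetic "roof plane" with about 2 cm of vertical noise
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 10.0, size=(500, 2))
z = 0.3 * xy[:, 0] + 0.1 * xy[:, 1] + rng.normal(0.0, 0.02, 500)
roof = np.column_stack([xy, z])

flat, n = planarity(roof)
print("planar:", flat, "normal:", np.round(n, 3))
```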

We next look for connectivity between planar surfaces (do they belong to a compound roof?). Once all of these steps are performed, we have collections of points classified as Building (class 6 in the ASPRS LAS scheme). The final step is to convert these point clusters into polygons that are "roof-like." Here we use point tracing algorithms that are constrained by criteria such as "lines must connect with angles of zero degrees or 90 degrees" and so forth.

It is really these same statistical modeling techniques that allow us to do "heads-up" digitizing in LiDAR data (or image data, for that matter). An example is tracing the centerline of a roadway by digitizing the paint strip. The LiDAR data will not support extraction of the centerline using direct DSP techniques (since this line is an edge or high frequency feature). What we are really doing is a best fit of straight line segments to the point cloud. If we are doing unguided drawing, we are subconsciously doing a best fit of the straight line to the data (something the human brain is exceptionally good at). If we have guided drawing, the software is doing the fit (via a linear regression algorithm) for us.
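A guided fit of this kind is nothing exotic. Here is a sketch, with made-up numbers, of an ordinary least-squares line fit to points digitized along a paint stripe. (In practice an orthogonal, total least-squares fit is preferable when both coordinates are noisy, but the idea is the same.)

```python
import numpy as np

# Points digitized along a painted stripe: roughly collinear, with LiDAR noise
rng = np.random.default_rng(2)
x = np.linspace(0.0, 50.0, 40)                         # metres along the road
y = 0.25 * x + 3.0 + rng.normal(0.0, 0.08, x.size)     # true line plus ~8 cm noise

# Ordinary least-squares fit of a straight line segment to the points
slope, intercept = np.polyfit(x, y, deg=1)

# The residuals show how well the segment explains the digitized points
residuals = y - (slope * x + intercept)
print("slope, intercept:", round(slope, 3), round(intercept, 3))
print("RMS residual (m):", round(float(np.sqrt(np.mean(residuals**2))), 3))
```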

It is very important to carefully plan a LiDAR collection based on the data you ultimately wish to extract. In nearly all cases, the data extraction approach will be a combination of DSP techniques (for example, extracting a bare earth model) and template or primitive fitting approaches. An assessment of the ultimate accuracy of the product will be a function of the extraction technique. For DSP methods, it will be limited by the sample spacing. For statistical extraction, the error is computed from the fundamental point accuracy rolled into the statistics of the fit. Regardless of the approach you use, remember that storage is super cheap and recollection is very expensive. Always collect at the maximum affordable data density.

Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LiDAR production and workflow tools and consulting services for airborne and mobile laser scanning.
