Random Points: Resolution Versus Precision: I Feel Inverted!


As we all know, a "Moore’s Law" of the LIDAR hardware industry seems to be a doubling of data resolution every few years. In fact, resolutions have become so high that we may have to begin to modify how we treat LIDAR data (as well as point clouds derived from dense image matching).

Recall (discussed in past Random Points columns) that resolution is the granularity with which we can measure something. With LIDAR point cloud data, resolution is the point density in the planimetric (X, Y) dimensions and the fineness with which we can resolve elevations in the Z dimension. For example, data with a nominal point spacing (NPS) of 20 cm has twice the planimetric resolution of data with an NPS of 40 cm (and four times as many points).
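
To make that arithmetic concrete, here is a minimal sketch (a hypothetical helper, not part of any specification) of the relationship between NPS and point density: for a roughly regular grid, density is about 1/NPS², so halving the spacing quadruples the point count.

```python
# Approximate relationship between nominal point spacing (NPS) and point density
# for a roughly uniform grid of returns: density ~ 1 / NPS^2.
def density_from_nps(nps_m: float) -> float:
    """Approximate point density (points per square meter) for a given NPS in meters."""
    return 1.0 / (nps_m ** 2)

print(density_from_nps(0.40))  # 40 cm NPS -> ~6.25 pts/m^2
print(density_from_nps(0.20))  # 20 cm NPS -> ~25 pts/m^2 (four times as many points)
```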

Precision, on the other hand, is the repeatability of a measurement of the same point in object space. Imagine scanning a perfectly flat, horizontal sheet of steel with an airborne laser scanner. The variation in Z of the laser point readings is a measure of precision. Therefore, precision in this case is related to the variance of the vertical readings over our theoretical flat plate. A system with perfect precision would show a variance of zero whereas a very "noisy" system would have a high variance. It is important to note that resolution and precision are independent attributes of a scanner.
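
As a simple illustration of the flat-plate thought experiment, the sketch below simulates noisy returns over a level surface (the elevation and noise level are invented for the example) and reports the spread of the Z readings as the precision estimate.

```python
import numpy as np

# Hypothetical illustration: estimate precision from returns collected over a flat,
# horizontal plate. The spread (standard deviation) of the Z values is a direct
# measure of the sensor's vertical precision (repeatability).
def vertical_precision(z_values: np.ndarray) -> float:
    """One-sigma vertical precision estimate from repeated readings of a flat surface."""
    return float(np.std(z_values, ddof=1))

# Simulated flat-plate scan: true elevation 100.000 m, 2 cm (one-sigma) sensor noise.
rng = np.random.default_rng(42)
z = 100.0 + rng.normal(0.0, 0.02, size=10_000)
print(f"Estimated precision: {vertical_precision(z) * 100:.1f} cm (one sigma)")
```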

With the ever-increasing resolution of scanning systems, we are now beginning to see the situation (particularly in lower cost systems) where resolution and precision are becoming inverted; that is, resolution is exceeding precision. An example of this is shown in Figure 1. One of the immediate considerations when resolution exceeds precision is the measurement of vertical accuracy. The current American Society for Photogrammetry and Remote Sensing (ASPRS) vertical accuracy specification instructs the analyst to compute check point residuals by triangulating the LIDAR points and then measuring the vertical distance from the check point to the intersected triangle surface. However, with surfaces exhibiting high resolution (lots of small TIN facets) but low precision (lots of vertical excursions of the TIN nodes), it is obvious that we are measuring noise rather than the true local elevation of the point cloud. This is evident in Figure 2, where the check point is shown in red in a sea of highly variable, small triangles. Even though this is an area of "flat" data, a small planimetric shift in the check point can result in Z excursions as large as the noise itself. This is not good because it is not repeatable (meaning the precision is low!).
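
The sketch below is a simplified illustration of this effect (synthetic data; it is not the ASPRS procedure verbatim): a flat area with 5 cm of vertical noise is triangulated, and shifting the check point planimetrically by a few centimeters changes the residual by roughly the noise level.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Synthetic example: dense coverage (~50 pts/m^2) over a 10 m x 10 m flat area at
# elevation 50 m, with 5 cm (one-sigma) vertical noise.
rng = np.random.default_rng(0)
n = 5000
xy = rng.uniform(0.0, 10.0, size=(n, 2))
z = 50.0 + rng.normal(0.0, 0.05, size=n)

# LinearNDInterpolator builds a Delaunay TIN internally and interpolates on its facets.
tin = LinearNDInterpolator(xy, z)

check_z = 50.0  # surveyed check point elevation
for shift in (0.0, 0.05, 0.10):  # 0, 5, 10 cm planimetric shifts of the check point
    px, py = 5.0 + shift, 5.0
    surf_z = float(tin([[px, py]])[0])  # TIN surface elevation under the (shifted) point
    residual = surf_z - check_z
    print(f"shift {shift * 100:3.0f} cm -> residual {residual * 100:+.1f} cm")
```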

In addition to a miscalculation of true vertical accuracy, there are a number of other impacts when there is a resolution/precision inversion. One of these is the performance of automatic ground classification filters. Most successful automatic ground classification algorithms contain some variant of the Axelsson1 adaptive TIN algorithm. Among other tests, this algorithm examines the slope from the vertices of the enclosing triangle of the current ground surface up to a candidate point; if that slope is too steep, the point is not added to the candidate ground surface. Obviously the wildly sloping facets of Figure 2 will present a problem for this and similar algorithms.
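
For readers who want a feel for the mechanics, here is a hedged sketch of an Axelsson-style acceptance test (not GeoCue's implementation; the thresholds and function names are illustrative only): a candidate point joins the ground surface only if it lies close to the enclosing TIN facet and the angles it subtends to the facet's vertices are small. Noisy facets with wildly varying slopes cause legitimate ground points to fail such a test.

```python
import numpy as np

# Illustrative sketch of an Axelsson-style densification test.
def accepts_candidate(candidate: np.ndarray,
                      facet_vertices: np.ndarray,
                      max_distance: float = 0.5,
                      max_angle_deg: float = 6.0) -> bool:
    """candidate: (3,) point; facet_vertices: (3, 3) triangle vertices as (x, y, z) rows."""
    # Distance from the candidate to the plane of the enclosing facet.
    v0, v1, v2 = facet_vertices
    normal = np.cross(v1 - v0, v2 - v0)
    normal /= np.linalg.norm(normal)
    if abs(np.dot(candidate - v0, normal)) > max_distance:
        return False
    # Angle from each facet vertex up (or down) to the candidate point.
    for v in facet_vertices:
        dxy = np.linalg.norm(candidate[:2] - v[:2])
        dz = abs(candidate[2] - v[2])
        if np.degrees(np.arctan2(dz, dxy)) > max_angle_deg:
            return False
    return True
```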

An additional issue arises in the generation of contours. When resolution exceeds precision, the noise appears as high-frequency "jiggle" in the contour lines (see Figure 3). Not only is this an unacceptable cartographic presentation, but it is probably not indicative of the true nature of the surface being modeled.

So what is the solution? Well, the obvious answer is to not use a sensor with low precision! Unfortunately, this is not always practical. Low cost, low precision LIDAR sensors are quite popular in drone-based collection systems, and many times the nature of the project prevents the use of a high quality sensor (there is simply not enough profit margin in the project). Instead, what is needed are methods to "smooth" these data that do not violate the integrity of the true surface model. For accuracy probing, we probably need to specify an algorithm such as Inverse Distance Weighting (IDW) or, better still, the distance from the check point to a locally fitted plane, measured along the plane's surface normal.
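
Below is a minimal sketch of both alternatives, assuming the LIDAR points nearest the check point have already been gathered; the helper names and parameters are illustrative, not part of any specification.

```python
import numpy as np

def idw_elevation(check_xy: np.ndarray, pts_xy: np.ndarray, pts_z: np.ndarray,
                  power: float = 2.0, eps: float = 1e-9) -> float:
    """Inverse-distance-weighted elevation at the check point from nearby LIDAR points."""
    d = np.linalg.norm(pts_xy - check_xy, axis=1)
    w = 1.0 / (d ** power + eps)
    return float(np.sum(w * pts_z) / np.sum(w))

def distance_to_fitted_plane(check_xyz: np.ndarray, pts_xyz: np.ndarray) -> float:
    """Signed distance from the check point to a least-squares plane fit to nearby
    points, measured along the plane's surface normal."""
    centroid = pts_xyz.mean(axis=0)
    # The right singular vector with the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(pts_xyz - centroid)
    normal = vt[-1]
    return float(np.dot(check_xyz - centroid, normal))
```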

We at GeoCue are working on some algorithms to correctly deal with this issue. It is not a straightforward task because we cannot make assumptions about the underlying terrain. In areas of rapid vertical terrain change, very high frequency (i.e. high resolution) sampling is required. In areas that are relatively smooth, low frequency sampling is appropriate. Thus, the solution must be a frequency-adaptive low pass filter (the EEs among you will recognize this as an adaptive Nyquist criterion). Until such filters become routine, you can improve these types of data sets by applying a filter such as a sub-sampling median filter (a minimal sketch follows below). This is not ideal since it is not frequency adaptive, but it will generally improve results. In the meantime, use good shock absorbers when sampling these terrains!
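
As an example of that interim approach, here is a simple sub-sampling median filter sketch (the cell size and structure are assumptions, not GeoCue's adaptive filter): the points are binned into a coarse planimetric grid and each occupied cell is replaced by the median point of its contents.

```python
import numpy as np

def subsample_median(points: np.ndarray, cell_size: float) -> np.ndarray:
    """points: (N, 3) array of x, y, z; returns one median point per occupied grid cell."""
    # Bin points into a planimetric grid of the requested cell size.
    keys = np.floor(points[:, :2] / cell_size).astype(np.int64)
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    # Replace each cell's points with their component-wise median (x, y, z).
    out = [np.median(points[inverse == cell], axis=0)
           for cell in range(inverse.max() + 1)]
    return np.array(out)
```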

1 Axelsson, P. DEM generation from laser scanner data using adaptive TIN models. Int. Arch. Photogramm. Remote Sens. 2000, 33, 111–118.

Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.
