A 4.455Mb PDF of this article as it appeared in the magazine complete with images is available by clicking **HERE**

As many of you regular readers of this column know, I have been restructuring the GeoCue Group to ensure that we are riding the crest of the point cloud wave. My prediction is that within the next 18 months, point cloud data derived from images collected by small Unmanned Aerial Systems (sUAS) will dwarf the volume of point cloud data collected by airborne LIDAR.

One of the big transitions that I foresee is in the method of computing stockpile and pit volumes over fairly localized areas (say, several square kilometers). Of course, before we go into the specifics of methodology, it is useful to ask why these data are needed in the first place!

In the simplest of cases, a construction company phones a gravel quarry, needing 50,000 cubic yards of road crush gravel for delivery over a two-week period. The quarry operator needs to know what is on hand in existing stockpiles. Of course, very experienced operators have a pretty good notion of the answer to this question without the need for a volumetric analysis. However, stockpile inventory represents "goods on hand" for the quarry.

Generally Accepted Accounting Principles (GAAP) require a financial accounting of this on-hand inventory (it is a ready asset on the books). Here a simple eyeball approximation is not acceptable; typically an accuracy of 3% or better is required. Thus, there are true business drivers for stockpile analysis. Imagine the convenience of having a nearby sUAS operator who can produce these data from a small camera or LIDAR.

The general problem we are trying to solve is determining the volume of material contained between the "hull" of the material and the "base" on which the material rests (see Figure 1). In the case of a pit, the hull becomes the surface of the quarry and the "volume" is the void space defined by the base-hull model.
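Conceptually, the base-hull volume reduces to integrating the height difference between the two surfaces. A minimal sketch of the idea follows, using a regular elevation grid in Python/NumPy; the function name and the gridded representation are my own illustration of the concept, not how any particular package implements it:

```python
import numpy as np

def stockpile_volume(hull_z, base_z, cell_size):
    """Approximate the volume between a hull surface and a base surface.

    hull_z, base_z : 2D arrays of elevations sampled on the same regular
                     grid (NaN where no data); cell_size : grid spacing (m).
    Only positive differences (hull above base) are summed, giving a
    stockpile volume; summing the negative side instead would give the
    void volume of a pit.
    """
    dz = hull_z - base_z
    dz = np.where(np.isnan(dz), 0.0, dz)      # skip cells with no data
    return float(np.sum(np.clip(dz, 0.0, None)) * cell_size ** 2)
```

For example, a flat-topped pile one meter high covering a 10 m x 10 m grid of 1 m cells yields 100 cubic meters.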

Today these volumetric analysis projects are carried out using ground-based techniques (GPS, total stations) or airborne methods such as traditional photogrammetric stereo methods or LIDAR. Ground-based techniques tend to be very labor intensive and require personnel to work in harm's way. Indeed, in many cases stockpiles are not accessible to ground personnel. For the manned-platform airborne case, mobilization expenses (the cost of scheduling and ferrying a manned aircraft) make these operations relatively expensive. Therefore, the sUAS is a very attractive solution for small areas.

We have added volumetric analysis tools as well as cross-section export capability to our general point cloud software, LP360. This allows volumetric analysis and visualization to be performed at the desktop. Very importantly, it allows volumetric data to be obtained in software that is completely independent of the algorithms that generated the point cloud data. In general, you do not want to use the same tool that generated the data (such as the correlation software used to create the point cloud) to perform the volumetric analysis; you need independent confirmation of accuracy.

There are many different modalities for computing a volume but perhaps the most useful interactive method is to simply draw a polygon that represents the base (see Figure 2). In this process, the base is sketched in 2D in the Map View. An automated algorithm computes the elevation of the vertices of the base, yielding a 3D surface to which the hull is compared. This process can also be automated, of course, by importing a file of pre-existing 2D or 3D base polygons.
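The step of assigning elevations to the 2D base vertices can be illustrated with a toy "drape" that takes each vertex's elevation from its nearest cloud point. This is only a stand-in for whatever interpolation a production tool actually uses (a TIN-based interpolation would be more robust); the function name and approach here are illustrative assumptions:

```python
import numpy as np

def drape_polygon(vertices_xy, cloud_xyz):
    """Assign each 2D base vertex the elevation of its nearest cloud point.

    vertices_xy : (n, 2) array of sketched base vertices.
    cloud_xyz   : (m, 3) array of point cloud coordinates.
    Returns an (n, 3) array of 3D base vertices.
    """
    out = []
    for vx, vy in vertices_xy:
        # squared horizontal distance from this vertex to every cloud point
        d2 = (cloud_xyz[:, 0] - vx) ** 2 + (cloud_xyz[:, 1] - vy) ** 2
        out.append([vx, vy, cloud_xyz[np.argmin(d2), 2]])
    return np.array(out)
```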

So the above describes a typical scenario for computing volumes rapidly from point cloud data, whether those data come from correlated imagery or LIDAR. Now the question is, how accurate are the results? Well, we feel quite confident in the validation of the algorithms, so the question boils down to "how accurately do the data represent the hull surface?"

We have been engaged by several clients to address this question for different data acquisition scenarios. The analysis turns out to be much trickier than one might think. The real question is one of "ground truth." For example, given a dense point cloud derived from redundant imagery (a Photo-Correlated Digital Surface Model, PCDSM), how does one determine the accuracy of the cloud?

The best case scenario would be to have very high density aerial LIDAR data (say 6 cm post spacing) with excellent relative accuracy tied to surveyed ground control points (GCP). So far, we have not received this type of validation data. We usually receive a high density PCDSM (say 5 cm nominal post spacing) and LIDAR data that is more on the order of 70 cm nominal post spacing. These LIDAR data will work fine for a relatively smooth stockpile but when the surface exhibits high frequency undulation such as that of Figure 3, a low resolution reference source is not adequate for measuring the accuracy of the PCDSM.

In this particular example, we were provided check points obtained using stereo photogrammetric techniques. This grid of points is depicted in Figure 4 in both plan and profile views. Note that the points generally exhibit good conformance to the PCDSM. However, the photogrammetric checkpoints were no doubt developed using an automated correlator (often called an automated "Dot on Ground", DOG, tool) within the stereo compilation workstation. This means we are actually comparing one correlator (the PCDSM) to another (the DOG)! While the relatively good correlation is comforting, it is not ground truth.

It is instructive to compare numerical results between two different software applications used to generate the PCDSMs (these are popular Structure from Motion commercial software packages used to generate point clouds from overlapping images). The results of Correlator A are depicted in Figure 5 and the results from Correlator B in Figure 6. These results were obtained using the control/checkpoint analysis functions in LP360. These quality check algorithms "probe" a point cloud surface with a collection of control points and report the difference between the Z of the control point and the Z extracted from the point cloud at the probed location.
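The "probe" idea can be sketched in a few lines. Here the surface Z at each checkpoint is estimated by inverse-distance weighting of the k nearest cloud points; this is an assumed simplification of a production QC tool's TIN probe, and the function name is my own:

```python
import numpy as np

def probe_dz(control_xyz, cloud_xyz, k=3):
    """Report dZ = (estimated surface Z) - (control Z) per control point.

    control_xyz : iterable of (x, y, z) checkpoints.
    cloud_xyz   : (m, 3) array of point cloud coordinates.
    """
    dzs = []
    for cx, cy, cz in control_xyz:
        d2 = (cloud_xyz[:, 0] - cx) ** 2 + (cloud_xyz[:, 1] - cy) ** 2
        idx = np.argsort(d2)[:k]                  # k nearest cloud points
        w = 1.0 / (np.sqrt(d2[idx]) + 1e-9)       # inverse-distance weights
        z = np.sum(w * cloud_xyz[idx, 2]) / np.sum(w)
        dzs.append(z - cz)
    return np.array(dzs)
```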

First we note that the Root Mean Square Error (RMSE) is roughly the same between A and B (about 16 cm for A and about 18 cm for B). However, the difference in mean error is suspicious. For A, the mean error is 0.7 cm whereas for B the mean error is -7.2 cm. This tells us that B is consistently under the point cloud surface (if we take our photogrammetric points to be "truth") and hence will generate a volume that is less than that computed using data from Correlator A. Also note that the error range for B is much larger (about 2.2 m) than for A (about 1.5 m).
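The distinction between RMSE and mean error matters because RMSE blends bias and noise, while the mean error isolates the systematic bias that shifts a computed volume up or down. A minimal sketch of these statistics over a set of dZ residuals (function name my own):

```python
import numpy as np

def error_stats(dz):
    """Summarise a set of dZ residuals (surface Z minus checkpoint Z)."""
    dz = np.asarray(dz, dtype=float)
    return {
        "mean": float(dz.mean()),                    # systematic bias
        "rmse": float(np.sqrt(np.mean(dz ** 2))),    # bias + noise combined
        "range": float(dz.max() - dz.min()),         # spread of worst cases
    }
```

To put the bias in perspective: a systematic offset of -7.2 cm over, say, a hypothetical one-hectare stockpile footprint would shift the computed volume by roughly 0.072 m x 10,000 m2, or about 720 cubic meters.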

So what is the conclusion of all of this? Well, first of all, it is very easy to compute volumes from point clouds using an appropriate, independent tool such as LP360. This tool need only be validated a single time to confirm correct algorithm implementation. More important is the accuracy of the software used to generate the point cloud from imagery (for example, PhotoScan, Pix4D, Correlator 3D) or of the LIDAR system. Hopefully you have seen from the above analysis that an independent source of "ground truth" is necessary to prove this out.

I have given a rather detailed example of using photogrammetrically derived points for ground truth but pointed out the fallacy in doing this. A much more rigorous source of independent truth will be necessary to provide the confidence we need to draw a satisfactory conclusion. Another point (pun intended) that I hope is clear is the necessity of a sampling density appropriate to the surface texture. Surfaces with high frequency undulations will have to be sampled at a much higher frequency than will "smooth" surfaces. In a future Random Points, we will have a look at the comparison of dense, accurate LIDAR data to PCDSM. Till next time, I hope your business becomes voluminous!
