Random Points: Out of Control?

I used part of my Thanksgiving holiday to catch up on some industry reading. What I found was rather disturbing. The offending information comes primarily from overview articles in online sources regarding small unmanned aerial systems (sUAS) and their data collection accuracy. While much of the information is useful, the statements regarding achievable accuracies and the techniques that can be employed are, for the most part, patently false. I find this a very disturbing trend because, while practitioners of the art (e.g., professional land surveyors and others schooled in geomatics) will recognize the errors, executives making decisions about deploying these technologies could be led to make entirely wrong decisions.

All of this made me think about geopositioning and how we make statements about project accuracies. For example, I often read articles that state project accuracies in terms of residuals measured from signalized ground control points (see Figure 1 for an example of a control/check target).

The idea is to lay out control and check (test) points in a pattern conducive to modeling and testing (a subject for a different article). The control points are used in the actual modeling. For photogrammetry solutions based on Structure from Motion (SfM), the control points are used to warp a free net model to a real-world spatial reference system via a similarity transform (usually a Helmert transformation). They are used in a similar way in LIDAR data "fitting," although often in a less rigorous manner. Since these control points were used in the actual modeling, they cannot be used for accuracy testing (the model will likely exhibit its best fit at these control points). Points withheld from the modeling (check/test points) are used to assess the accuracy of the model by analyzing the distance from each test point to the corresponding point in the model (the so-called residuals). The common assessment metric is the Root Mean Square Error (RMSE). An example of an error report is shown in Figure 2.
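As a rough illustration of the arithmetic (not the actual computation any particular software package performs), here is a minimal sketch of how RMSE is derived from check-point residuals; the residual values below are invented purely for the example:

```python
import math

# Hypothetical check-point residuals (surveyed minus modeled), in feet.
# Each entry is (dE, dN, dZ) for one withheld check point -- these numbers
# are made up for illustration, not taken from the report in Figure 2.
residuals = [
    (0.03, -0.05, 0.06),
    (-0.04, 0.02, -0.05),
    (0.05, 0.03, 0.04),
]

n = len(residuals)

# Horizontal (planar) RMSE combines the east and north residual components.
rmse_h = math.sqrt(sum(de**2 + dn**2 for de, dn, _ in residuals) / n)

# Vertical RMSE uses the elevation residuals only.
rmse_v = math.sqrt(sum(dz**2 for _, _, dz in residuals) / n)

print(f"Horizontal RMSE: {rmse_h:.3f} ft ({rmse_h * 30.48:.1f} cm)")
print(f"Vertical RMSE:   {rmse_v:.3f} ft ({rmse_v * 30.48:.1f} cm)")
```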

Note that in this report we have an RMSE of 0.043 ft planar and 0.053 ft vertical (1.3 cm and 1.6 cm, respectively). Many articles I have read report this as the accuracy of the project, but what does this mean?

When we speak of accuracy, we mean how close a measured position is to a known position. This known position is called the datum. It is this known position that is often omitted when I read articles that make statements about accuracy. Most folks who mention accuracy probably mean relative to a spatial reference system such as some realization of NAD83 and NAVD88. This is important when you want to fit data from different measurement sessions together; thus, it is critically important to data fusion and temporal analysis operations.

In the example I have provided above, the datum is actually a collection of reference points: the signalized test points. Now this could be useful if I had these points permanently installed such that they could not be disturbed (virtually impossible for most sites!). What we really need to do is tie our test markers to a known datum. In the example above, we did this with a Global Navigation Satellite System (GNSS) base station using real time kinematic (RTK) positioning. We tied the base station location to a reference datum via a National Geodetic Survey (NGS) service called the Online Positioning User Service (OPUS). An OPUS extract for the project under discussion is depicted in Figure 3.

This particular report provides an assessment of horizontal accuracy relative to NAD_83 (2011, EPOCH:2010.0000) and vertical accuracy relative to NAVD88 (GEOID12B). This project day was pretty spectacular in terms of base station accuracy!

In addition to the error in the base station, we have the error of the RTK system itself as well as the error in placing the rover pole (antenna plumb pole) over each target. For scientific studies, one would do what is called a network adjustment on the test targets. If the distance between the base and rover becomes large (say, kilometers), the RTK error can become dominant. For our project it was about 0.3 cm horizontal and 0.5 cm vertical.

All of these errors must be combined to arrive at the overall project accuracy. If we simply add the error contributions, we obtain (all in cm) 1.3 (target) + 0.3 (RTK) + 0.9 (base) = 2.5 cm horizontal and 1.6 (target) + 0.5 (RTK) + 3.1 (base) = 5.2 cm vertical. This is just a back-of-the-envelope approximation, since I really need to look at error propagation for each individual target from the base. One can also make a strong argument that the errors are independent and thus could be added in quadrature. However, I think you can see the point: all errors from the datum to the point of final measurement must be taken into account.
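For readers who want to see the two combination approaches side by side, here is a minimal sketch using the example numbers above. It simply contrasts the straight sum with a root-sum-square (quadrature) combination and is not a substitute for a proper per-target error propagation:

```python
import math

# Component error estimates from the example above, in cm.
horizontal = {"target": 1.3, "rtk": 0.3, "base": 0.9}
vertical = {"target": 1.6, "rtk": 0.5, "base": 3.1}

def linear_sum(errors):
    # Simple, worst-case style combination: add the components directly.
    return sum(errors.values())

def quadrature(errors):
    # Root-sum-square: appropriate when the error sources are independent.
    return math.sqrt(sum(e ** 2 for e in errors.values()))

print(f"Horizontal: {linear_sum(horizontal):.1f} cm (linear), "
      f"{quadrature(horizontal):.1f} cm (quadrature)")
print(f"Vertical:   {linear_sum(vertical):.1f} cm (linear), "
      f"{quadrature(vertical):.1f} cm (quadrature)")
```

Either way you combine them, the base station contribution clearly dominates the vertical budget.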

The net issue for my example project is that a naïve approach would report the horizontal error as about 1.3 cm and the vertical error as about 1.6 cm. The errors are, in fact, more on the order of 2.5 cm horizontal and 5.0 cm vertical.

Does this matter? Yes, it matters a tremendous amount in very high accuracy work. There is a rule of thumb that control and test points need to be about three times more accurate than the desired accuracy of the project. However, you can clearly see in my example that the error is dominated by our ability to accurately measure the initial reference station (the base)! This dominance of the base station error shows why construction projects often use local reference systems; doing so eliminates this large error component.

The bottom line is this. When someone makes a statement about accuracy, ask some in-depth questions:
What is the datum for horizontal and vertical?
Can you provide a diagram of the measurement chain from reference datum to final model measurements?
What was the statistical accuracy of each measurement in the chain?
What are the computations used in the error propagation?

If these questions cannot be clearly answered, you should be very suspicious of the accuracy claims.

Lewis Graham is the President and CTO of GeoCue Corporation. GeoCue is North America’s largest supplier of LIDAR production and workflow tools and consulting services for airborne and mobile laser scanning.
