Overview of the ASPRS Positional Accuracy Standards for Digital Geospatial Data

Edition 2, Version 2 (2024)

The ASPRS journey in standards development

The global geospatial community relies on the American Society for Photogrammetry and Remote Sensing (ASPRS) for education and standardization. Since the early 1980s, ASPRS has championed the development of accuracy standards for geospatial data. Early versions, including the legacy standards of 1990, were designed for the map-making practices of that era, which was characterized by paper-based maps. In 2014, ASPRS published the new Positional Accuracy Standards for Digital Geospatial Data, developed for the new digital era of mapping practices. These reflected the vast experience gained from decades of mapping practice and industry use of the legacy ASPRS standards. Challenges arose, however, because that experience was based on older practices and the attendant technology of geospatial data production, which may or may not apply to today’s digital sensors, such as lidar and digital cameras. This paper provides users of the new standards, specifically Edition 2, Version 2 (published on June 24, 2024), with the details needed to understand and apply the new accuracy standards in their day-to-day activities.

Design philosophy and the new paradigm

The new standards are intended to be broadly based, technologically independent, and applicable to most common mapping applications and projects. They were developed to embrace the new era of geospatial data acquisition technologies and processing methods. This new direction became apparent when we moved to digital sensors (e.g., lidar and digital cameras) and the resultant digital workflow required to process the acquired digital data. The introduction of digital sensors to our industry put an end to the old concepts of producing and representing map content. The previous era of geospatial data production dictated the use of paper as the only medium to present mapping data and the use of map scale and contour interval as measures to represent map accuracy. These legacy accuracy measures were based on the sensor’s configuration and other acquisition parameters, such as flying altitude and base-to-height ratio (B/H ratio).

This approach worked for that era because the film camera was the only sensor used to collect data for geospatial data production. Film cameras shared a common design based on a film format of 230 mm x 230 mm (9 inches x 9 inches) and a 150-mm (6-inch) lens focal length. This standardized geometry made it easy to estimate product accuracy from flight parameters. Today’s digital cameras come in varied designs that make it difficult to relate the resulting accuracy to flight parameters, and today’s digital geospatial workflow eliminates the use of these old accuracy measures. The new ASPRS standards were therefore designed to be sensor-agnostic and data-driven. The new paradigm is founded on the principle that geospatial data users should not have to worry about data acquisition hardware, which changes rapidly in response to advances in sensor technology. Users should be concerned only with the accuracy of the products they receive and should be able to specify product accuracy to suit their project needs. This is what shaped the design philosophy of the new standards: they offer users unlimited accuracy levels without sensor or hardware limitations.

These standards are intended to be a living document to be updated in future editions to reflect changing technologies and user needs.

Accuracy explained

Historically, geospatial accuracy has taken two forms. The first, “absolute” accuracy, quantifies how close the measured position on a map or in a dataset is to the true physical position, as represented in a reference datum. The second quantifies internal data quality, expressing how points within the data relate to each other. Older versions of the ASPRS standards called the latter “relative” accuracy. The latest version, however, changed the term to “data internal precision,” as it is widely held that such measures of data quality do not fall under data accuracy. Accordingly, all references to “accuracy” in the new ASPRS standards and in this paper refer to absolute accuracy.

Adopted statistical measures

The new standards embrace the use of the root mean square error (RMSE) as the only accuracy measure. This is a departure from Edition 1 of the standards, where both RMSE and the 95% confidence level were used to express product accuracy. The main reason for this change is to eliminate the user confusion experienced since the release of Edition 1. Experience showed that only users versed in probability and statistics understood that accuracy expressed as RMSE and accuracy expressed at the 95% confidence level describe the same error distribution, the only difference being the probability associated with each statistical term.

To help readers understand this argument, I would like to describe the differences and similarities between these accuracy terms using the funnel approach. In Figure 1, the colored balls represent the errors resulting from an accuracy assessment using independent checkpoints. The varying ball diameters represent the different error values found at each checkpoint. The spout diameter of each funnel represents the maximum error value that the corresponding statistical term (50%, 90%, 95%, and 99.73%) allows. The largest error is allowed by funnel D, which represents the 99.73% confidence level, while funnel A, which represents a confidence level of 50%, allows the smallest error value of 6.74 cm. If such numbers are presented to an end-user of geospatial data who is unfamiliar with these statistical terms, and you ask which accuracy term they prefer, most likely they would choose the smallest number, 6.74 cm, represented by funnel A or the 50% confidence level. This choice makes sense for users who want the highest accuracy level they can get for their products.

If we pose a similar question to those on the production side, they will most likely choose funnel D, thinking that the larger accuracy number of 30 cm gives some leeway during production. However, both choices are wrong, as the 6.74 cm and 30 cm numbers represent the same accuracy level. Although the 6.74 cm figure associated with the 50% confidence level is a tight number, only 50% of the balls need to pass through the narrow spout of the funnel; in other words, only 50% of the checkpoints must show an error of 6.74 cm or less. Similarly, 30 cm may look like a looser accuracy figure, but it requires that 99.73% of the balls pass through the wide spout of the funnel; in other words, 99.73% of the checkpoints must have an error no larger than 30 cm. It is easy for the layperson to be confused by all these details. That is why we removed the 95% confidence level: it offers no additional benefit over RMSE, while causing considerable confusion.
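
To make the funnel analogy concrete, the short sketch below converts a single RMSE value into the error thresholds associated with different confidence levels, assuming zero-mean, normally distributed errors and a hypothetical vertical RMSE of 10 cm; the 6.74 cm and 30 cm figures above both correspond to that same 10-cm RMSE.

    from statistics import NormalDist

    def error_threshold(rmse, confidence):
        # For zero-mean, normally distributed errors with sigma = RMSE, the error
        # magnitude not exceeded at the given two-sided confidence is k * RMSE,
        # where k = inv_cdf(0.5 + confidence / 2).
        k = NormalDist().inv_cdf(0.5 + confidence / 2.0)
        return k * rmse

    rmse_v = 10.0  # hypothetical vertical RMSE, in cm
    for conf in (0.50, 0.90, 0.95, 0.9973):
        print(f"{conf:7.2%} of checkpoint errors are within {error_threshold(rmse_v, conf):5.2f} cm")
    # 50.00% ->  6.74 cm (funnel A)    99.73% -> 30.00 cm (funnel D)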

The 3D accuracy approach

The new standards introduce yet another accuracy term for the new era of engineering and geospatial needs: three-dimensional accuracy. Considering the fast pace of development in digital twins, smart cities, and other applications that require three-dimensional representation of features, we wanted to offer a way to measure feature accuracy within a three-dimensional model. Currently, we estimate horizontal and vertical accuracy separately, which helps describe the accuracy of 3D models but is not as efficient for representing accuracy in a native 3D environment.

Horizontal positional accuracy standard for geospatial data

Horizontal accuracy is meant for products that live in two-dimensional space, such as a planimetric map or an ortho map. In practice, geospatial data users pay little attention to vertical accuracy in such flat products, because there is no way to measure heights on them or to model their vertical accuracy. The new standards offer a simple yet comprehensive approach to horizontal accuracy, with unlimited horizontal accuracy classes to suit any geospatial product, keeping the standards useful regardless of changes in future technologies or practices.

Table 1 presents the horizontal accuracy standard. The accuracy class is determined by the user or by project needs. Once the user specifies that their project requires, for example, an accuracy of 5 cm, that figure becomes the accuracy class according to the new ASPRS standards, and 5 cm is interpreted as the absolute horizontal accuracy measured as RMSE. Additionally, the horizontal accuracy standard sets an accuracy measure for mosaic seamline mismatch. Before advanced digital image processing tools and efficient matching algorithms, users struggled to stitch images (or frames) together without visible shifts in features, such as roads and buildings, extending across adjacent frames. Because it was impossible to eliminate mismatch between frames, the industry (and therefore the accuracy standards) accepted some mismatch within a certain tolerance. This tolerance is provided in Table 1.

Today’s image processing is more refined and rarely are users faced with these issues. Although it is uncommon, edge mismatch may still occur in some projects that were either poorly collected or processed, or if an inaccurate digital elevation model (DEM) was used during orthorectification.

To assess the horizontal accuracy for an orthorectified map, for example, a minimum of 30 independent checkpoints clearly visible on the map should be surveyed to an accuracy that suits the expected map accuracy.
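
As an illustration of how such an assessment is computed (a minimal sketch, not the standards’ normative procedure), the snippet below derives the horizontal RMSE from checkpoint coordinates measured on the map and their independently surveyed counterparts, using the familiar relation RMSEH = sqrt(RMSEx² + RMSEy²):

    import math

    def horizontal_rmse(checkpoints):
        """Horizontal accuracy from ((x_map, y_map), (x_survey, y_survey)) pairs.

        The surveyed coordinates come from the independent, higher-accuracy source.
        Returns (RMSEx, RMSEy, RMSEH) in the units of the input coordinates.
        """
        n = len(checkpoints)
        dx2 = sum((xm - xs) ** 2 for (xm, ym), (xs, ys) in checkpoints)
        dy2 = sum((ym - ys) ** 2 for (xm, ym), (xs, ys) in checkpoints)
        rmse_x = math.sqrt(dx2 / n)
        rmse_y = math.sqrt(dy2 / n)
        return rmse_x, rmse_y, math.sqrt(rmse_x ** 2 + rmse_y ** 2)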

Vertical positional accuracy standard for elevation data

Like the horizontal accuracy standards, the vertical accuracy standards offer a simple but comprehensive approach for all geospatial products (Table 2). Unlike the horizontal accuracy standards, the vertical accuracy standards include two categories, for vegetated and non-vegetated terrain. However, only the non-vegetated vertical accuracy (NVA) is considered when accepting or rejecting data based on the results of the vertical accuracy assessment. The vegetated vertical accuracy (VVA) has no threshold and should be assessed and reported as found, with no weight in accepting or rejecting the data, unless a different prior agreement is reached between the data user and the data producer. If the user specifies a 10-cm vertical accuracy requirement for their product, this goes on record as a 10-cm vertical accuracy class, i.e., an NVA with RMSEV = 10 cm.

NVA should be assessed using a minimum of 30 independent checkpoints and up to 120 checkpoints for large projects. VVA needs a minimum of 30 checkpoints regardless of project size, unless otherwise agreed upon between the data user and the data producer. The vertical accuracy standards also introduce measures for data internal precision, such as within-swath data smoothness and the vertical shift between adjacent swaths. Notice that the standards refrain from using the term “relative accuracy” and replace it with the new term “data internal precision,” as data smoothness does not fall under accuracy. In lidar, for example, data smoothness is mainly related to hardware performance and does not follow the theories of statistics and probability the way absolute accuracy does.

Three-dimensional positional accuracy standard for geospatial data

As mentioned earlier, our industry is heading towards a 3D GIS concept. This is evident in the use of colorized point clouds, 3D models, digital twins, etc. Such a 3D environment requires a suitable new accuracy measure, and the 3D positional accuracy standard is introduced to meet that need. Table 3 lists the 3D positional accuracy standard and presents unlimited accuracy classes to suit all application needs.

The one concern, to be addressed by software suppliers, is the lack of a commercial viewer for true 3D data visualization and manipulation. The industry needs an application that is easily accessible to all geospatial data users and offers smooth viewing of the 3D model, with a terrain-hugging floating mark or cursor to measure feature positions in a true 3D environment. Without such a capability, users currently combine individually assessed vertical and horizontal accuracies to produce the 3D accuracy of their products.
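
A minimal sketch of that combination, assuming the 3D error is taken as the quadrature sum of the horizontal and vertical RMSEs:

    import math

    def rmse_3d(rmse_h, rmse_v):
        """Combine independently assessed horizontal and vertical RMSEs into a 3D RMSE."""
        return math.sqrt(rmse_h ** 2 + rmse_v ** 2)

    # e.g., RMSEH = 7.1 cm and RMSEV = 10.0 cm give RMSE3D of about 12.3 cm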

Ground controls and products’ accuracy

Surveyed control points play a crucial role in assessing and improving a product’s absolute accuracy. Whether used to process lidar or digital imagery, the number and distribution of ground control points are determined by the expected product accuracy. There is no single method to determine the number and distribution scheme; the approach is based on practical experience coupled with user judgement. The general rule, however, is that ground control points and checkpoints should be evenly distributed throughout the project area unless natural factors (such as water and heavy vegetation) prevent or skew such distribution. As for the quality of the surveyed control points, these standards require that the control survey meet specific accuracy criteria relative to the final mapping products produced from those points. Survey accuracy requirements differ according to mapping product type, i.e., whether it is two-dimensional (ortho map) or three-dimensional (elevation data). The new standards set the following requirements for ground control points for imagery-based products (a minimal verification sketch follows the list):

  • Ground control for aerial triangulation designed for digital planimetric data (orthoimagery and/or map) only:
    RMSEH(GCP) ≤ ½ × RMSEH(MAP)
    RMSEV(GCP) ≤ RMSEH(MAP)
  • Ground control for aerial triangulation designed for projects that include elevation or 3D products, in addition to digital planimetric data (orthoimagery and/or map):
    RMSEH(GCP) ≤ ½ × RMSEH(MAP)
    RMSEV(GCP) ≤ ½ × RMSEV(DEM)
  • Similarly, the accuracy of the ground control points used for lidar calibration and boresighting should be twice the target accuracy of the final products:
    RMSEV(GCP) ≤ ½ × RMSEV(DEM)
  • Currently, the industry focuses only on the vertical accuracy of lidar datasets. If a horizontal accuracy measure is required for lidar data, users can adopt the requirement provided for imagery-based products or:
    RMSEH(GCP) ≤ ½ × RMSEV(DEM)
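
The sketch below (a hypothetical helper, not part of the standards) applies the imagery-based thresholds above to check whether a planned control survey is accurate enough for the target product classes:

    def gcp_meets_requirements(rmse_h_gcp, rmse_v_gcp, rmse_h_map, rmse_v_dem,
                               includes_elevation=True):
        """Check control-survey accuracy against the planned product accuracy classes.

        Planimetric-only projects require RMSEH(GCP) <= 1/2 RMSEH(MAP) and
        RMSEV(GCP) <= RMSEH(MAP); projects that also deliver elevation or 3D
        products tighten the vertical requirement to 1/2 RMSEV(DEM).
        """
        ok_horizontal = rmse_h_gcp <= 0.5 * rmse_h_map
        if includes_elevation:
            ok_vertical = rmse_v_gcp <= 0.5 * rmse_v_dem
        else:
            ok_vertical = rmse_v_gcp <= rmse_h_map
        return ok_horizontal and ok_vertical

    # e.g., a control survey with RMSEH(GCP) = 2 cm and RMSEV(GCP) = 4 cm supports
    # a 5-cm RMSEH map with a 10-cm RMSEV DEM: 2 <= 2.5 and 4 <= 5 -> True
    print(gcp_meets_requirements(2.0, 4.0, 5.0, 10.0))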

Accuracy assessment

For projects requiring accuracy testing according to ASPRS standards, perform the testing according to the following understanding:

  • Horizontal accuracy: Compare planimetric coordinates in the data set with those from a more accurate source.
  • Vertical accuracy: Compare surface elevations in the data set with those from a more accurate source, using checkpoints and scientifically sound interpolation methods (a minimal interpolation sketch follows this list).
  • Three-dimensional accuracy: Compare the combined X, Y, and Z coordinates in the data set with those from a more accurate source.
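
For the vertical case, the elevation surface must be interpolated at each checkpoint location before residuals are computed. The sketch below assumes a regular elevation grid and bilinear interpolation; any scientifically sound method (TIN-based linear interpolation is common for point clouds) can serve the same role.

    import math

    def bilinear_dem_elevation(dem, x0, y0, cell, x, y):
        """Bilinearly interpolate a gridded DEM at (x, y).

        dem is a list of rows (row 0 at coordinate y0); x0, y0 locate dem[0][0]
        and cell is the grid spacing. The point must fall inside the grid.
        """
        cx = (x - x0) / cell
        cy = (y - y0) / cell
        i, j = int(cy), int(cx)      # lower-left grid cell indices
        fx, fy = cx - j, cy - i      # fractional position inside the cell
        z00, z01 = dem[i][j], dem[i][j + 1]
        z10, z11 = dem[i + 1][j], dem[i + 1][j + 1]
        return (z00 * (1 - fx) * (1 - fy) + z01 * fx * (1 - fy)
                + z10 * (1 - fx) * fy + z11 * fx * fy)

    def vertical_rmse(dem, x0, y0, cell, checkpoints):
        """RMSEV of DEM elevations against surveyed checkpoints (x, y, z_survey)."""
        residuals = [bilinear_dem_elevation(dem, x0, y0, cell, x, y) - z
                     for x, y, z in checkpoints]
        return math.sqrt(sum(r * r for r in residuals) / len(residuals))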

An unbiased accuracy assessment is the only way geospatial data users can be certain that the delivered products meet project or application requirements. For the assessment to be unbiased, the following conditions must be satisfied:

  1. The surveyed checkpoints used in the assessment should be independent of the surveyed control points used in the data calibration process, i.e., assessment checkpoints are not used in the imagery aerial triangulation process or the boresighting of lidar data.
  2. The checkpoints should be more accurate than the expected accuracy of the tested product. According to these standards, the checkpoints should be at least twice as accurate as the tested product, i.e., surveyed with an error no greater than half the product’s expected RMSE.
  3. Checkpoints should be evenly distributed throughout the project area as much as is feasible. Terrain and access may affect this distribution, requiring practical judgment.
  4. A minimum of 30 checkpoints should be used for assessing horizontal accuracy and the NVA for project areas up to 1000 km². These numbers increase with project size (Table 4).

If the project cannot meet the 30-checkpoint minimum due to a small test area (e.g., UAV-based projects) or budget constraints, report the accuracy verification with fewer checkpoints according to section 7.16 of the standards.

As for assessing VVA, the standards recommend a minimum of 30 checkpoints regardless of the project size. Data users and data producers can agree, however, on additional or fewer checkpoints if this suits the project requirements.

The previously recommended number and distribution of NVA and VVA checkpoints may vary according to the significance of different land cover categories and project requirements. The checkpoint numbers suggested in Table 4 are recommendations based on best practices. Data producers and data users may mutually agree to modify such requirements based on anticipated accuracy, project area and scope, terrain challenges, accessibility of the area, and budget constraints.

Accuracy reporting

Horizontal, vertical, and 3D positional accuracies shall be assessed and formally reported according to one of the statements provided in section 7.16 of the standards. In addition to the accuracy class, the following related statistical quantities should be computed and reported (a brief computation sketch follows the list):

  • Residual errors at each checkpoint
  • Maximum error
  • Minimum error
  • Mean error
  • Median error
  • Standard deviation
  • RMSE.
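
These quantities are straightforward to compute from the signed checkpoint residuals; the minimal sketch below also highlights the difference between the standard deviation (spread about the mean error) and the RMSE (spread about zero), which differ whenever a bias is present:

    import math
    import statistics

    def checkpoint_statistics(residuals):
        """Summary statistics to report alongside the accuracy class.

        residuals: signed checkpoint errors (data value minus check survey value),
        in project units.
        """
        n = len(residuals)
        return {
            "max error": max(residuals),
            "min error": min(residuals),
            "mean error": statistics.fmean(residuals),
            "median error": statistics.median(residuals),
            "std deviation": statistics.stdev(residuals),          # spread about the mean
            "RMSE": math.sqrt(sum(r * r for r in residuals) / n),  # spread about zero
        }

    # e.g., residuals of [+0.06, -0.03, +0.09, +0.02, -0.05] m give a mean error of
    # +0.018 m, a standard deviation of about 0.059 m, and an RMSE of about 0.056 m.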

The standards differentiate between accuracy assessment and reporting performed by data users and that performed by data producers.

Accuracy reporting by data users or their consultants

The standards provide specific statements for reporting the three types of positional accuracy. The statements differ according to whether the accuracy testing meets the ASPRS standards’ requirement for a 30-checkpoint minimum.

When accuracy testing meets ASPRS standards requirements

Here the testing should be performed using a minimum of 30 checkpoints.

  • Reporting horizontal positional accuracy
    “This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a __(cm) RMSEH Horizontal Positional Accuracy Class. The tested horizontal positional accuracy was found to be RMSEH = __(cm).”
  • Reporting NVA
    “This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a __(cm) RMSEV Vertical Accuracy Class. The Non-Vegetated Vertical Accuracy (NVA) was found to be RMSEV = __(cm).”
  • Reporting VVA
    “This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a __(cm) RMSEV Vertical Accuracy Class. The Vegetated Vertical Accuracy (VVA) was found to be RMSEV = __(cm).”
  • Reporting 3D positional accuracy
    “This data set was tested to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a ___ (cm) RMSE3D Three-Dimensional Positional Accuracy Class. The tested three-dimensional accuracy was found to be RMSE3D = ___(cm) within the NVA tested area and RMSE3D = ___(cm) within the VVA tested area.”1

When accuracy testing does not meet ASPRS standards requirements

The following reporting statements are designed for testing performed with fewer than 30 checkpoints, whether due to the small size of the project or a limited budget; many UAV projects fall into this category. Although the standards do not endorse accuracy assessments performed with fewer than 30 checkpoints, they provide a vehicle to report such findings and, at the same time, encourage truth-in-reporting:

  • Reporting horizontal positional accuracy
    “This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSEH Horizontal Positional Accuracy Class. The tested horizontal positional accuracy was found to be RMSEH = ___(cm) using the reduced number of checkpoints.”
  • Reporting NVA
    “This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSEV Vertical Positional Accuracy Class. The tested vertical positional accuracy was found to be RMSEV = ___(cm) using the reduced number of checkpoints in the NVA tested area.”
  • Reporting VVA
    “This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSEV Vertical Positional Accuracy Class. The tested vertical positional accuracy was found to be RMSEV = ___(cm) using the reduced number of checkpoints in the VVA tested area.”
  • Reporting 3D positional accuracy
    “This data set was tested as required by ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024). Although the Standards call for a minimum of thirty (30) checkpoints, this test was performed using ONLY __ checkpoints. This data set was produced to meet a ___(cm) RMSE3D Three-Dimensional Positional Accuracy Class. The tested three-dimensional positional accuracy was found to be RMSE3D = ___(cm) using the reduced number of checkpoints in the NVA tested area and RMSE3D = ___(cm) using the reduced number of checkpoints in the VVA tested area.”

Accuracy reporting by data producers

Data producers do not usually have access to independent checkpoints; most of the time, they use the ground control points employed in aerial triangulation or lidar boresighting to assess product accuracy. Of course, this practice is a biased test (and therefore unacceptable) because those points were used in product calibration. Reporting statements by data producers are much simpler, however, as they do not report accuracy results; they are merely a declaration of what the producer promised to deliver according to the contract requirements. Data producers rely on their vast experience producing similar products in the past, assuming they employ mature technologies and follow the best practices and guidelines through established and documented procedures during project design, data processing, and quality control, as set forth in the addenda to these standards.

  • Reporting horizontal positional accuracy
    “This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a __(cm) RMSEH Horizontal Positional Accuracy Class.”
  • Reporting NVA
    “This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a __(cm) RMSEV Non-Vegetated Vertical Accuracy (NVA) Class.”
  • Reporting VVA
    “This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a __(cm) RMSEV Vegetated Vertical Accuracy (VVA) Class.”
  • Reporting 3D positional accuracy
    “This data set was produced to meet ASPRS Positional Accuracy Standards for Digital Geospatial Data, Edition 2, Version 2 (2024) for a ___ (cm) RMSE3D Three-Dimensional Positional Accuracy Class within the NVA tested area and RMSE3D = ___(cm) within the VVA tested area.”

Horizontal accuracy of elevation data

The topic of horizontal accuracy was rarely dealt with before Edition 1 of the ASPRS standards. Among the main reasons for this lack of focus were:

Horizontal accuracy is difficult to verify in the field: Whether from lidar or imagery, a point cloud is a discrete data set with sparse points, which makes it difficult to model a ground feature well enough to recognize it accurately in the field and pinpoint its horizontal accuracy to within a few centimeters. An example is a lidar data set produced to meet USGS QL1. The nominal pulse spacing for QL1 is 35 cm, which does not support measuring horizontal features much smaller than 35 cm. As point cloud density increases with advances in lidar technology, however, this task is becoming more achievable. Fortunately, the situation for a point cloud produced from imagery is different, because there is more control over producing a very high point-cloud density.

Horizontal accuracy was not needed: The previous era of mapping was not focused on 3D model representation, and most applications were designed to produce land contours. In today’s world, with the introduction of new concepts such as digital twins, smart cities, autonomous driving, indoor scanning, and BIM, knowing how accurate the data is horizontally is crucial for public safety and data performance. The introduction of the new 3D accuracy in the ASPRS standards is a testimony to these new applications and requirements.

The new standards offer the following approaches for deriving or estimating horizontal accuracy:

  • For photogrammetrically derived elevation data, adopt the same horizontal accuracy class assigned for planimetric data or digital orthoimagery produced from the same source, based on the same photogrammetric adjustment.
  • For lidar elevation data, the standards provide a formula (given in section 7.6) for estimating horizontal accuracy from three quantities: the flying height above mean terrain, in meters; the radial GNSS positional error, in meters, which can be derived from published manufacturer specifications; and the IMU angular error, in angular units, also derived from published manufacturer specifications.

This formula was cross-checked against the horizontal accuracy computations of two of the main manufacturers of aerial lidar systems and showed broad agreement.

The above formula simplifies the error budget in lidar and reflects the main contributors to that error budget. Horizontal error in lidar-derived elevation data is largely a function of the following parameters:

  • Sensor positioning error as derived from the Global Navigation Satellite System (GNSS)
  • Attitude (angular orientation) error as derived from the IMU
  • Flying height above mean terrain.

There are other error sources in the lidar system, such as laser ranging and clock timing, which the formula ignores because they contribute minimally to the error budget and are considered negligible when estimating horizontal error. The error caused by laser-beam divergence is also ignored, for reasons detailed in section 7.6 of the standards.
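
As a conceptual illustration of this error budget only (not the published equation itself, which appears in the standards), the sketch below combines, in quadrature, the GNSS radial error with the horizontal displacement that the IMU angular error produces at the flying height:

    import math

    def lidar_horizontal_error_estimate(gnss_error_m, imu_error_deg, flying_height_m):
        """Illustrative estimate of lidar horizontal error (RMSEH, meters).

        Combines, in quadrature, the GNSS radial positional error with the
        horizontal displacement caused by the IMU angular error at the flying
        height above mean terrain. Conceptual sketch only; use the exact
        equation in the standards for real projects.
        """
        imu_displacement = flying_height_m * math.tan(math.radians(imu_error_deg))
        return math.sqrt(gnss_error_m ** 2 + imu_displacement ** 2)

    # e.g., 0.05 m GNSS error, 0.005 deg IMU error, 1500 m AGL -> about 0.14 m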

The role of control survey accuracy in product accuracy

Edition 2 of the standards introduces a requirement to consider the survey accuracy of ground control and checkpoints when computing the final product accuracy. Today’s advances in lidar, digital sensors, and digital analytical modeling enable us to produce highly accurate geospatial products that, in some cases, exceed the accuracy of field surveying techniques such as GNSS-based RTK. Incorporating the field surveying accuracy is now crucial in determining the real product accuracy, but it was not needed decades ago, when sensors and procedures yielded far less accurate products. When dealing with a product such as a DOQQ with an accuracy of 10 m, for example, a few centimeters of error in the checkpoints does not affect the final product accuracy.

The new approach introduced by the ASPRS standards divides the product accuracy into two components. The first component, RMSEH1 and RMSEV1, is derived from the product’s fit to the checkpoints. The second component, RMSEH2 and RMSEV2, represents the errors associated with the accuracy of the checkpoint survey itself. Both components are needed to compute the product’s final accuracy.
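
A minimal sketch of that computation, assuming the two independent components are combined in quadrature (the usual error-propagation rule; the exact formulae appear in the standards):

    import math

    def final_product_accuracy(rmse_fit, rmse_survey):
        """Combine product-to-checkpoint fit error with checkpoint survey error.

        rmse_fit:    RMSEH1 or RMSEV1, the product's fit to the checkpoints
        rmse_survey: RMSEH2 or RMSEV2, the accuracy of the checkpoint survey itself
        Assumes the two independent components add in quadrature.
        """
        return math.sqrt(rmse_fit ** 2 + rmse_survey ** 2)

    # e.g., a 5.0 cm fit to checkpoints surveyed with 2.0 cm accuracy yields a
    # reported final accuracy of about 5.4 cm, not 5.0 cm.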

Such requirements make it obligatory for data users and data producers to be acquainted with the field surveying process through their surveyors. In other words, they ultimately need to know the accuracy of the survey so they can use it in the formulae above. Experience reveals that many manufacturers of field surveying equipment do not provide the absolute accuracy figures needed for these formulae; instead, several produce quality figures representing data internal precision, which should not be used here. Acknowledging this problem, the standards provide in Table 5 a list of predicted accuracies for most of the surveying techniques used by the industry today. We hope that the manufacturers of surveying equipment recognize the needs of their clients who want to embrace the new ASPRS standards by providing a way to compute the absolute accuracy of the survey.

Vegetated versus non-vegetated accuracy

The new standards introduce an important change to the assessment of accuracy in vegetation. Some vegetated environments are challenging for many aerial data acquisition sensors, whether lidar or imaging cameras. The new standards remove the pass/fail criterion for the VVA; it now needs only to be tested and reported according to the requirements outlined in the standards. The logic behind this change is based on the following:

Lidar (and imagery) cannot penetrate dense vegetation perfectly: This results in a less dense lidar point cloud under trees. A sparse point cloud, in turn, results in poorer modeling of the terrain under trees, and this compromised terrain model causes the VVA assessment to show a poor fit of the checkpoints to the lidar point cloud. Figure 2 illustrates the problem of modeling terrain with a sparse point cloud versus a dense point cloud. When terrain is modeled with a less dense point cloud, there is a risk of estimating the wrong elevation for a given location, such as point A in the top profile of Figure 2. The software most likely creates a triangulated irregular network (TIN), in which connections between points of the point cloud form triangles, and reports the terrain elevation at a location by linear interpolation inside the triangle within which that location falls.

As depicted in Figure 2, due to the sparse point cloud around point A, its elevation could be estimated with an error of 2 m. Point A could be one of the checkpoints surveyed under trees to assess VVA, and when this happens the derived VVA cannot be trusted. The only way to prevent such errors is to have a smooth, continuous model of the terrain, which can be guaranteed only by a point cloud dense enough to model the terrain accurately, as illustrated in the lower profile of Figure 2. More details on this topic can be found in section D of Addendum I of the standards.
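
For readers unfamiliar with how a TIN reports an elevation, the minimal sketch below shows the plane (barycentric) interpolation inside a single triangle; with sparse vertices, this interpolated plane can miss real terrain variation between points, which is the source of the error at point A:

    def tin_elevation(p, a, b, c):
        """Linear (TIN) interpolation of elevation at point p inside triangle (a, b, c).

        Each vertex is (x, y, z); p is (x, y). Returns the plane elevation at p.
        """
        (xa, ya, za), (xb, yb, zb), (xc, yc, zc) = a, b, c
        x, y = p
        det = (yb - yc) * (xa - xc) + (xc - xb) * (ya - yc)
        w_a = ((yb - yc) * (x - xc) + (xc - xb) * (y - yc)) / det
        w_b = ((yc - ya) * (x - xc) + (xa - xc) * (y - yc)) / det
        w_c = 1.0 - w_a - w_b
        return w_a * za + w_b * zb + w_c * zc

    # e.g., tin_elevation((1.0, 1.0), (0, 0, 10.0), (3, 0, 13.0), (0, 3, 16.0)) -> 13.0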

Surveying under trees is not reliable: GNSS signals are degraded and PDOP values worsen under dense canopies, resulting in inaccurate surveys.

Field survey measures the actual ground: The survey team usually measures the elevation of the actual ground, while the lidar point cloud measures the tops of the leaves, debris, and grass overlaying the ground. Such discrepancies in the measured elevations undermine the assessed VVA.

The forest floor is dynamic in nature: Forest floor debris moves with wind and water runoff, and animals disturb the soil. In addition to the error vegetation already introduces, the forest floor changes in height and shape over time, which can pose serious problems, especially if the field ground survey is not performed at the same time as the airborne survey.

The advanced sensor technology on the market produces highly accurate point clouds: It is therefore appropriate to base data acceptance or rejection on the accuracy of the data over bare earth, where the ground is not obscured from the sensor. This was the practice for decades in photogrammetry, when contours in areas under trees were drawn as dashed contours to indicate low-confidence areas where accuracy was not guaranteed.

The power of the six addenda

For the first time, the ASPRS standards contain best practices and guidelines for their use. The information included in these addenda is not easily found in a textbook or technical paper; it is a collection of science and practical experience authored by professionals with decades of surveying and mapping practice. The following is a brief description of the addenda:

Addendum I: General Best Practices and Guidelines

This addendum provides information on the following topics:

  • Reporting notes for delivered geospatial products
  • Error normality testing and reporting
  • Understanding accuracy statistics and errors mitigation
  • Lidar data quality versus positional accuracy
  • Lidar system classification and grouping.

Addendum II: Best Practices and Guidelines for Field Surveying for Ground Control Points and Checkpoints

This addendum is a valuable addition which details everything users need to know about conducting safe and successful field surveys. No person should start a survey in the field for projects that must meet ASPRS standards without first consulting this addendum.

Addendum III: Best Practices and Guidelines for Mapping with Photogrammetry

This addendum walks users through all aspects of photogrammetric mapping, from planning to aerial data collection, production and accuracy assessment. It is a valuable resource for practitioners as well as those just starting their careers in photogrammetric mapping.

Addendum IV: Best Practices and Guidelines for Mapping with Lidar

Lidar is becoming the backbone of our industry and the money-maker for almost all mapping businesses. This addendum provides information similar to that provided in addendum III for photogrammetric mapping but with a focus on lidar, lidar sensors, and operations.

Addendum V: Best Practices and Guidelines for Mapping with Unmanned Aerial Systems (UAS)

While UAS is taking our industry and other aspects of life by storm, this addendum provides everything needed to create a successful production line for UAS operations. It contains two sections—one focused on photogrammetric operations and production and the other on UAS-based lidar operations and production.

Addendum VI: Best Practices and Guidelines for Mapping with Oblique Imagery

The market lacks good information about best practices in oblique imagery operations. That was the motive behind drafting this addendum, which contains information about acquisition and production of oblique imagery that is difficult to find anywhere else.

Acknowledgments

The author and ASPRS deeply appreciate the many volunteers who dedicated two years of their time to create Edition 2, Version 2. Their names are listed on the back of the published standards. We are forever grateful for their efforts and generosity.

This article will be published concurrently in Photogrammetric Engineering & Remote Sensing and LIDAR Magazine.


  1. The 3D positional accuracy in vegetated areas can be omitted from this report based on a mutual agreement between the data user and the data producer.

About the Author

Dr. Qassim A. Abdullah

Woolpert Vice President and Chief Scientist Qassim Abdullah, PhD, PLS, CP, has more than 45 years of combined industrial, R&D, and academic experience in analytical photogrammetry, digital remote sensing, and civil and surveying engineering. When he’s not presenting at geospatial conferences around the world, Abdullah teaches photogrammetry and remote sensing courses at the University of Maryland and Penn State, authors a monthly column for the ASPRS journal Photogrammetric Engineering & Remote Sensing, sits on NOAA’s Hydrographic Services Review Panel, and mentors R&D activities within Woolpert and around the world.